Test Report: Docker_Linux_crio 21934

0ee4f00f81c855d6dbc5c3cb2cb1b494940d38dc:2025-11-22:42437

Failed tests (37/328)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.24
35 TestAddons/parallel/Registry 12.7
36 TestAddons/parallel/RegistryCreds 0.4
37 TestAddons/parallel/Ingress 148.38
38 TestAddons/parallel/InspektorGadget 5.25
39 TestAddons/parallel/MetricsServer 5.29
41 TestAddons/parallel/CSI 48.79
42 TestAddons/parallel/Headlamp 2.37
43 TestAddons/parallel/CloudSpanner 5.25
44 TestAddons/parallel/LocalPath 9.17
45 TestAddons/parallel/NvidiaDevicePlugin 6.23
46 TestAddons/parallel/Yakd 6.25
47 TestAddons/parallel/AmdGpuDevicePlugin 6.23
97 TestFunctional/parallel/ServiceCmdConnect 602.83
125 TestFunctional/parallel/ServiceCmd/DeployApp 600.6
126 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.99
129 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.9
133 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.62
134 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.28
136 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.18
137 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.32
152 TestFunctional/parallel/ServiceCmd/HTTPS 0.52
153 TestFunctional/parallel/ServiceCmd/Format 0.51
154 TestFunctional/parallel/ServiceCmd/URL 0.52
191 TestJSONOutput/pause/Command 2.44
197 TestJSONOutput/unpause/Command 1.88
270 TestPause/serial/Pause 7.01
299 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.04
302 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2
313 TestStartStop/group/old-k8s-version/serial/Pause 5.5
319 TestStartStop/group/no-preload/serial/Pause 5.37
323 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.93
329 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2
331 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.36
338 TestStartStop/group/newest-cni/serial/Pause 6.06
346 TestStartStop/group/embed-certs/serial/Pause 6.41
353 TestStartStop/group/default-k8s-diff-port/serial/Pause 5.61
TestAddons/serial/Volcano (0.24s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-386094 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-386094 addons disable volcano --alsologtostderr -v=1: exit status 11 (237.421624ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1121 23:48:18.988363   23875 out.go:360] Setting OutFile to fd 1 ...
	I1121 23:48:18.988669   23875 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:48:18.988680   23875 out.go:374] Setting ErrFile to fd 2...
	I1121 23:48:18.988685   23875 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:48:18.988872   23875 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9122/.minikube/bin
	I1121 23:48:18.989208   23875 mustload.go:66] Loading cluster: addons-386094
	I1121 23:48:18.989666   23875 config.go:182] Loaded profile config "addons-386094": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:48:18.989690   23875 addons.go:622] checking whether the cluster is paused
	I1121 23:48:18.989788   23875 config.go:182] Loaded profile config "addons-386094": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:48:18.989802   23875 host.go:66] Checking if "addons-386094" exists ...
	I1121 23:48:18.990241   23875 cli_runner.go:164] Run: docker container inspect addons-386094 --format={{.State.Status}}
	I1121 23:48:19.008021   23875 ssh_runner.go:195] Run: systemctl --version
	I1121 23:48:19.008078   23875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386094
	I1121 23:48:19.024594   23875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/addons-386094/id_rsa Username:docker}
	I1121 23:48:19.114120   23875 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 23:48:19.114212   23875 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 23:48:19.142617   23875 cri.go:89] found id: "075e2ddbfd1b3053a90e5152f5a3c88d7b0f512a81d837cc0eec9e714320db50"
	I1121 23:48:19.142641   23875 cri.go:89] found id: "7dcc52ab64881edaa7603d95b2807743f4ad209f6d38b4cbf9a19086eca32117"
	I1121 23:48:19.142645   23875 cri.go:89] found id: "28017a975316b1551b2d1bc8dc9eca9342d94f3a04393f626159565843f3a4b9"
	I1121 23:48:19.142648   23875 cri.go:89] found id: "af7d23a1702df3cd4a84775a1c1997ae723153a2dae54920ea3e66a13f11402d"
	I1121 23:48:19.142657   23875 cri.go:89] found id: "b0ebc6a2643ce15a93031b7642059742231c8aefb11a96fa36edac0fdf3ed04c"
	I1121 23:48:19.142661   23875 cri.go:89] found id: "08636839ef014bca0593992d4306a8c75b8669bb16e7ab303f060c2b45b61813"
	I1121 23:48:19.142664   23875 cri.go:89] found id: "2cad693d643e270be01d0b986dd241986a30a6e0bf4e8ae3e5709026b2d7ec13"
	I1121 23:48:19.142667   23875 cri.go:89] found id: "f1ffe717c9acc1343be4709c4cc2b9a8d4103a7f7086b39e554752db9dc132c9"
	I1121 23:48:19.142670   23875 cri.go:89] found id: "2d8c2a76b689b579408c5432035569d448c0ccc7ac8e4a34b4e373f1fb8de96c"
	I1121 23:48:19.142680   23875 cri.go:89] found id: "fdcdf133e27bc1deb19395508496fc46e7183d7c82784e36026a37e4616b634c"
	I1121 23:48:19.142683   23875 cri.go:89] found id: "dafa66d52ee1e679c9d8823cf6b0536f31d7e9635c9b865d81c2b0b7824f7f09"
	I1121 23:48:19.142686   23875 cri.go:89] found id: "4188e634536cb2d7ee1210f1ef11fdfc593c5c568f3907a4a9835af3ba695afd"
	I1121 23:48:19.142689   23875 cri.go:89] found id: "9b71739f67786155a593d80c0e276ce41f1a6abb990cea23f25a6c9d884534fb"
	I1121 23:48:19.142692   23875 cri.go:89] found id: "d5d12cca9e0c9e6587e9f82cf3276be135a3248d2f7fd31949fc16b8b7ec6716"
	I1121 23:48:19.142695   23875 cri.go:89] found id: "d0c5a0bacbbac792d7e58a5e5c7bd0fee9db7cec3a448ebe89ea66be1fd493cc"
	I1121 23:48:19.142707   23875 cri.go:89] found id: "9ad0bc2610d4aed59a9924e959ff00b3282c0f84a8053b2c8a24bce05841da6b"
	I1121 23:48:19.142715   23875 cri.go:89] found id: "a72f974d29159737be96b385ecdb775e4812608008c4cffaa78a874f21d0dbfd"
	I1121 23:48:19.142720   23875 cri.go:89] found id: "e22fb0f003be5df198b862027391260b213589b89bd11cbaf227403d428038a9"
	I1121 23:48:19.142722   23875 cri.go:89] found id: "e2527b33e3e0a37254f4ce209f68208b0d6bb19a8e9d5f4c04577a95745c00e5"
	I1121 23:48:19.142725   23875 cri.go:89] found id: "8f7137d6b0740cb066ea7ebf0361dc186434b4e47bdb1c5dc607dabb5bfc4a73"
	I1121 23:48:19.142728   23875 cri.go:89] found id: "343a757d13fed0f3af37f66cd8776c9b1592f0157fd199fc07c446759a8c79dc"
	I1121 23:48:19.142730   23875 cri.go:89] found id: "07338917ef0048828dfbbd1553f9a031c7bd1f4699ffe7fcee081714d70aa687"
	I1121 23:48:19.142733   23875 cri.go:89] found id: "b69d049422e96952e03c6f61a20b8e9158e45490b685ec179d328f0af957256c"
	I1121 23:48:19.142736   23875 cri.go:89] found id: "d75ed4c5a71d1046bfb84c05ee8ef611de253000d3738fddfd1f3e871449f077"
	I1121 23:48:19.142738   23875 cri.go:89] found id: ""
	I1121 23:48:19.142784   23875 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 23:48:19.156152   23875 out.go:203] 
	W1121 23:48:19.157218   23875 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T23:48:19Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T23:48:19Z" level=error msg="open /run/runc: no such file or directory"
	
	W1121 23:48:19.157233   23875 out.go:285] * 
	* 
	W1121 23:48:19.160160   23875 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1121 23:48:19.161249   23875 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-amd64 -p addons-386094 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.24s)
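Every `addons disable` failure in this run shares this root cause: before disabling an addon, minikube checks whether the cluster is paused by shelling out to `sudo runc list -f json` (visible in the stderr above), and on this CRI-O node that call fails with `open /run/runc: no such file or directory`, so the command exits with MK_ADDON_DISABLE_PAUSED (status 11). A minimal Go sketch of that check flow — an illustration of what the log shows, not the actual minikube source, and assuming runc's `list -f json` output is a JSON array of objects carrying `id` and `status` fields:

```go
// Sketch of the pause check that fails above (cf. the cri.go "list paused" lines).
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// runcContainer mirrors just the fields we need from `runc list -f json`.
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"` // "running", "paused", ...
}

func listPaused() ([]string, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		// On this node the call fails with "open /run/runc: no such file or
		// directory" -- the error that surfaces as MK_ADDON_DISABLE_PAUSED.
		return nil, fmt.Errorf("runc list: %w", err)
	}
	var cs []runcContainer
	if err := json.Unmarshal(out, &cs); err != nil {
		return nil, err
	}
	var paused []string
	for _, c := range cs {
		if c.Status == "paused" {
			paused = append(paused, c.ID)
		}
	}
	return paused, nil
}

func main() {
	ids, err := listPaused()
	fmt.Println(ids, err)
}
```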

TestAddons/parallel/Registry (12.7s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 3.217329ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-sgqmn" [495ee6a1-7806-44cd-b4c4-c284b112d936] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002308999s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-7jwr9" [c7f10f4f-ba0a-466c-9874-8dad438139d6] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.0037771s
addons_test.go:392: (dbg) Run:  kubectl --context addons-386094 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-386094 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-386094 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (2.261848334s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-386094 ip
2025/11/21 23:48:39 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-386094 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-386094 addons disable registry --alsologtostderr -v=1: exit status 11 (230.106712ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1121 23:48:39.454214   26069 out.go:360] Setting OutFile to fd 1 ...
	I1121 23:48:39.454352   26069 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:48:39.454361   26069 out.go:374] Setting ErrFile to fd 2...
	I1121 23:48:39.454365   26069 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:48:39.454570   26069 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9122/.minikube/bin
	I1121 23:48:39.454813   26069 mustload.go:66] Loading cluster: addons-386094
	I1121 23:48:39.455155   26069 config.go:182] Loaded profile config "addons-386094": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:48:39.455172   26069 addons.go:622] checking whether the cluster is paused
	I1121 23:48:39.455255   26069 config.go:182] Loaded profile config "addons-386094": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:48:39.455267   26069 host.go:66] Checking if "addons-386094" exists ...
	I1121 23:48:39.455634   26069 cli_runner.go:164] Run: docker container inspect addons-386094 --format={{.State.Status}}
	I1121 23:48:39.472825   26069 ssh_runner.go:195] Run: systemctl --version
	I1121 23:48:39.472873   26069 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386094
	I1121 23:48:39.490169   26069 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/addons-386094/id_rsa Username:docker}
	I1121 23:48:39.579222   26069 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 23:48:39.579323   26069 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 23:48:39.607175   26069 cri.go:89] found id: "075e2ddbfd1b3053a90e5152f5a3c88d7b0f512a81d837cc0eec9e714320db50"
	I1121 23:48:39.607195   26069 cri.go:89] found id: "7dcc52ab64881edaa7603d95b2807743f4ad209f6d38b4cbf9a19086eca32117"
	I1121 23:48:39.607199   26069 cri.go:89] found id: "28017a975316b1551b2d1bc8dc9eca9342d94f3a04393f626159565843f3a4b9"
	I1121 23:48:39.607203   26069 cri.go:89] found id: "af7d23a1702df3cd4a84775a1c1997ae723153a2dae54920ea3e66a13f11402d"
	I1121 23:48:39.607206   26069 cri.go:89] found id: "b0ebc6a2643ce15a93031b7642059742231c8aefb11a96fa36edac0fdf3ed04c"
	I1121 23:48:39.607209   26069 cri.go:89] found id: "08636839ef014bca0593992d4306a8c75b8669bb16e7ab303f060c2b45b61813"
	I1121 23:48:39.607212   26069 cri.go:89] found id: "2cad693d643e270be01d0b986dd241986a30a6e0bf4e8ae3e5709026b2d7ec13"
	I1121 23:48:39.607215   26069 cri.go:89] found id: "f1ffe717c9acc1343be4709c4cc2b9a8d4103a7f7086b39e554752db9dc132c9"
	I1121 23:48:39.607217   26069 cri.go:89] found id: "2d8c2a76b689b579408c5432035569d448c0ccc7ac8e4a34b4e373f1fb8de96c"
	I1121 23:48:39.607223   26069 cri.go:89] found id: "fdcdf133e27bc1deb19395508496fc46e7183d7c82784e36026a37e4616b634c"
	I1121 23:48:39.607226   26069 cri.go:89] found id: "dafa66d52ee1e679c9d8823cf6b0536f31d7e9635c9b865d81c2b0b7824f7f09"
	I1121 23:48:39.607228   26069 cri.go:89] found id: "4188e634536cb2d7ee1210f1ef11fdfc593c5c568f3907a4a9835af3ba695afd"
	I1121 23:48:39.607231   26069 cri.go:89] found id: "9b71739f67786155a593d80c0e276ce41f1a6abb990cea23f25a6c9d884534fb"
	I1121 23:48:39.607235   26069 cri.go:89] found id: "d5d12cca9e0c9e6587e9f82cf3276be135a3248d2f7fd31949fc16b8b7ec6716"
	I1121 23:48:39.607237   26069 cri.go:89] found id: "d0c5a0bacbbac792d7e58a5e5c7bd0fee9db7cec3a448ebe89ea66be1fd493cc"
	I1121 23:48:39.607244   26069 cri.go:89] found id: "9ad0bc2610d4aed59a9924e959ff00b3282c0f84a8053b2c8a24bce05841da6b"
	I1121 23:48:39.607249   26069 cri.go:89] found id: "a72f974d29159737be96b385ecdb775e4812608008c4cffaa78a874f21d0dbfd"
	I1121 23:48:39.607254   26069 cri.go:89] found id: "e22fb0f003be5df198b862027391260b213589b89bd11cbaf227403d428038a9"
	I1121 23:48:39.607257   26069 cri.go:89] found id: "e2527b33e3e0a37254f4ce209f68208b0d6bb19a8e9d5f4c04577a95745c00e5"
	I1121 23:48:39.607260   26069 cri.go:89] found id: "8f7137d6b0740cb066ea7ebf0361dc186434b4e47bdb1c5dc607dabb5bfc4a73"
	I1121 23:48:39.607264   26069 cri.go:89] found id: "343a757d13fed0f3af37f66cd8776c9b1592f0157fd199fc07c446759a8c79dc"
	I1121 23:48:39.607267   26069 cri.go:89] found id: "07338917ef0048828dfbbd1553f9a031c7bd1f4699ffe7fcee081714d70aa687"
	I1121 23:48:39.607270   26069 cri.go:89] found id: "b69d049422e96952e03c6f61a20b8e9158e45490b685ec179d328f0af957256c"
	I1121 23:48:39.607275   26069 cri.go:89] found id: "d75ed4c5a71d1046bfb84c05ee8ef611de253000d3738fddfd1f3e871449f077"
	I1121 23:48:39.607278   26069 cri.go:89] found id: ""
	I1121 23:48:39.607315   26069 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 23:48:39.620402   26069 out.go:203] 
	W1121 23:48:39.621477   26069 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T23:48:39Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T23:48:39Z" level=error msg="open /run/runc: no such file or directory"
	
	W1121 23:48:39.621495   26069 out.go:285] * 
	* 
	W1121 23:48:39.624992   26069 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1121 23:48:39.626159   26069 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-amd64 -p addons-386094 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (12.70s)
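The registry itself was healthy here: the in-cluster `wget --spider` probe and the host-side `GET http://192.168.49.2:5000` both succeeded, and only the trailing `addons disable registry` hit the same pause-check error. For reference, a rough Go equivalent of that host-side probe (IP and port taken from the DEBUG line in this log; not part of the test suite):

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	// Node IP and registry port as reported by `minikube ip` / the DEBUG line above.
	resp, err := client.Get("http://192.168.49.2:5000")
	if err != nil {
		fmt.Println("registry not reachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("registry responded:", resp.Status)
}
```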

TestAddons/parallel/RegistryCreds (0.4s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.52932ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-386094
addons_test.go:332: (dbg) Run:  kubectl --context addons-386094 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-386094 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-386094 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (228.816394ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1121 23:48:44.926909   27074 out.go:360] Setting OutFile to fd 1 ...
	I1121 23:48:44.927075   27074 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:48:44.927087   27074 out.go:374] Setting ErrFile to fd 2...
	I1121 23:48:44.927094   27074 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:48:44.927353   27074 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9122/.minikube/bin
	I1121 23:48:44.927648   27074 mustload.go:66] Loading cluster: addons-386094
	I1121 23:48:44.928015   27074 config.go:182] Loaded profile config "addons-386094": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:48:44.928031   27074 addons.go:622] checking whether the cluster is paused
	I1121 23:48:44.928133   27074 config.go:182] Loaded profile config "addons-386094": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:48:44.928157   27074 host.go:66] Checking if "addons-386094" exists ...
	I1121 23:48:44.928564   27074 cli_runner.go:164] Run: docker container inspect addons-386094 --format={{.State.Status}}
	I1121 23:48:44.945910   27074 ssh_runner.go:195] Run: systemctl --version
	I1121 23:48:44.945960   27074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386094
	I1121 23:48:44.962333   27074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/addons-386094/id_rsa Username:docker}
	I1121 23:48:45.049805   27074 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 23:48:45.049887   27074 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 23:48:45.077318   27074 cri.go:89] found id: "075e2ddbfd1b3053a90e5152f5a3c88d7b0f512a81d837cc0eec9e714320db50"
	I1121 23:48:45.077338   27074 cri.go:89] found id: "7dcc52ab64881edaa7603d95b2807743f4ad209f6d38b4cbf9a19086eca32117"
	I1121 23:48:45.077343   27074 cri.go:89] found id: "28017a975316b1551b2d1bc8dc9eca9342d94f3a04393f626159565843f3a4b9"
	I1121 23:48:45.077346   27074 cri.go:89] found id: "af7d23a1702df3cd4a84775a1c1997ae723153a2dae54920ea3e66a13f11402d"
	I1121 23:48:45.077349   27074 cri.go:89] found id: "b0ebc6a2643ce15a93031b7642059742231c8aefb11a96fa36edac0fdf3ed04c"
	I1121 23:48:45.077353   27074 cri.go:89] found id: "08636839ef014bca0593992d4306a8c75b8669bb16e7ab303f060c2b45b61813"
	I1121 23:48:45.077356   27074 cri.go:89] found id: "2cad693d643e270be01d0b986dd241986a30a6e0bf4e8ae3e5709026b2d7ec13"
	I1121 23:48:45.077358   27074 cri.go:89] found id: "f1ffe717c9acc1343be4709c4cc2b9a8d4103a7f7086b39e554752db9dc132c9"
	I1121 23:48:45.077361   27074 cri.go:89] found id: "2d8c2a76b689b579408c5432035569d448c0ccc7ac8e4a34b4e373f1fb8de96c"
	I1121 23:48:45.077365   27074 cri.go:89] found id: "fdcdf133e27bc1deb19395508496fc46e7183d7c82784e36026a37e4616b634c"
	I1121 23:48:45.077368   27074 cri.go:89] found id: "dafa66d52ee1e679c9d8823cf6b0536f31d7e9635c9b865d81c2b0b7824f7f09"
	I1121 23:48:45.077371   27074 cri.go:89] found id: "4188e634536cb2d7ee1210f1ef11fdfc593c5c568f3907a4a9835af3ba695afd"
	I1121 23:48:45.077374   27074 cri.go:89] found id: "9b71739f67786155a593d80c0e276ce41f1a6abb990cea23f25a6c9d884534fb"
	I1121 23:48:45.077376   27074 cri.go:89] found id: "d5d12cca9e0c9e6587e9f82cf3276be135a3248d2f7fd31949fc16b8b7ec6716"
	I1121 23:48:45.077380   27074 cri.go:89] found id: "d0c5a0bacbbac792d7e58a5e5c7bd0fee9db7cec3a448ebe89ea66be1fd493cc"
	I1121 23:48:45.077390   27074 cri.go:89] found id: "9ad0bc2610d4aed59a9924e959ff00b3282c0f84a8053b2c8a24bce05841da6b"
	I1121 23:48:45.077400   27074 cri.go:89] found id: "a72f974d29159737be96b385ecdb775e4812608008c4cffaa78a874f21d0dbfd"
	I1121 23:48:45.077406   27074 cri.go:89] found id: "e22fb0f003be5df198b862027391260b213589b89bd11cbaf227403d428038a9"
	I1121 23:48:45.077411   27074 cri.go:89] found id: "e2527b33e3e0a37254f4ce209f68208b0d6bb19a8e9d5f4c04577a95745c00e5"
	I1121 23:48:45.077416   27074 cri.go:89] found id: "8f7137d6b0740cb066ea7ebf0361dc186434b4e47bdb1c5dc607dabb5bfc4a73"
	I1121 23:48:45.077429   27074 cri.go:89] found id: "343a757d13fed0f3af37f66cd8776c9b1592f0157fd199fc07c446759a8c79dc"
	I1121 23:48:45.077434   27074 cri.go:89] found id: "07338917ef0048828dfbbd1553f9a031c7bd1f4699ffe7fcee081714d70aa687"
	I1121 23:48:45.077438   27074 cri.go:89] found id: "b69d049422e96952e03c6f61a20b8e9158e45490b685ec179d328f0af957256c"
	I1121 23:48:45.077443   27074 cri.go:89] found id: "d75ed4c5a71d1046bfb84c05ee8ef611de253000d3738fddfd1f3e871449f077"
	I1121 23:48:45.077450   27074 cri.go:89] found id: ""
	I1121 23:48:45.077486   27074 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 23:48:45.091814   27074 out.go:203] 
	W1121 23:48:45.092927   27074 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T23:48:45Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T23:48:45Z" level=error msg="open /run/runc: no such file or directory"
	
	W1121 23:48:45.092941   27074 out.go:285] * 
	* 
	W1121 23:48:45.095893   27074 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1121 23:48:45.096953   27074 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-amd64 -p addons-386094 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.40s)

TestAddons/parallel/Ingress (148.38s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-386094 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-386094 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-386094 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [a3c80541-e305-45f2-9785-8d279acce1e7] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [a3c80541-e305-45f2-9785-8d279acce1e7] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.004041778s
I1121 23:48:48.904504   14585 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-386094 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-386094 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m15.08485833s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
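The step that fails is the in-node `curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'`, which timed out (exit status 28 is curl's operation-timeout code, surfaced through ssh). The Host header is what routes the request through ingress-nginx to the nginx backend; a Go equivalent of that probe, using the URL and virtual host from the test (illustration only):

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	req, err := http.NewRequest("GET", "http://127.0.0.1/", nil)
	if err != nil {
		panic(err)
	}
	// The ingress rule matches on the virtual host, not the IP, so the
	// Host header selects the nginx.example.com backend.
	req.Host = "nginx.example.com"

	client := &http.Client{Timeout: 30 * time.Second}
	resp, err := client.Do(req)
	if err != nil {
		fmt.Println("ingress probe failed:", err) // the test's curl timed out here
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```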
addons_test.go:288: (dbg) Run:  kubectl --context addons-386094 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-386094 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-386094
helpers_test.go:243: (dbg) docker inspect addons-386094:

-- stdout --
	[
	    {
	        "Id": "0a5362377949521c1545f6c039f39d2ec2f463f75d5949ca6db62ddafe2a34f9",
	        "Created": "2025-11-21T23:46:41.209448742Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 16591,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-21T23:46:41.245417207Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e5906b22e872a17998ae88aee6d850484e7a99144e0db6afcf2c44a53e6042d4",
	        "ResolvConfPath": "/var/lib/docker/containers/0a5362377949521c1545f6c039f39d2ec2f463f75d5949ca6db62ddafe2a34f9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0a5362377949521c1545f6c039f39d2ec2f463f75d5949ca6db62ddafe2a34f9/hostname",
	        "HostsPath": "/var/lib/docker/containers/0a5362377949521c1545f6c039f39d2ec2f463f75d5949ca6db62ddafe2a34f9/hosts",
	        "LogPath": "/var/lib/docker/containers/0a5362377949521c1545f6c039f39d2ec2f463f75d5949ca6db62ddafe2a34f9/0a5362377949521c1545f6c039f39d2ec2f463f75d5949ca6db62ddafe2a34f9-json.log",
	        "Name": "/addons-386094",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-386094:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-386094",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0a5362377949521c1545f6c039f39d2ec2f463f75d5949ca6db62ddafe2a34f9",
	                "LowerDir": "/var/lib/docker/overlay2/f1549353d16c9f3c0f32b3f4e20224e8347fb9673fb8a8a4386811faa028f190-init/diff:/var/lib/docker/overlay2/ba9391341a6f38c98c330a240b44a37e901d779a7c95e15c141c59c46e09c348/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f1549353d16c9f3c0f32b3f4e20224e8347fb9673fb8a8a4386811faa028f190/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f1549353d16c9f3c0f32b3f4e20224e8347fb9673fb8a8a4386811faa028f190/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f1549353d16c9f3c0f32b3f4e20224e8347fb9673fb8a8a4386811faa028f190/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-386094",
	                "Source": "/var/lib/docker/volumes/addons-386094/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-386094",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-386094",
	                "name.minikube.sigs.k8s.io": "addons-386094",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "a9786f385a998a88fa0e13a81952822a6fa54e1ae03327219d6b49a8ca7d36ff",
	            "SandboxKey": "/var/run/docker/netns/a9786f385a99",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-386094": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "610e21f260125ec5deb3917faa6075970dd2aa22a45c1761187c501484dce43e",
	                    "EndpointID": "6892ee1c5b1a406430c54c7c4ca8097888c2182216d49acd1126d847e7e71d54",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "b6:1c:a4:ed:83:af",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-386094",
	                        "0a5362377949"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
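The most useful part of the inspect dump is the port map: the node's 22/tcp is published on 127.0.0.1:32768, matching the `new ssh client: &{IP:127.0.0.1 Port:32768 ...}` lines earlier. The cli_runner lines above show the exact Go template minikube runs to extract that port; a small standalone sketch of the same lookup (profile name taken from this run):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same template seen in the cli_runner lines above: pull the host port
	// that Docker mapped to the container's 22/tcp (the node's sshd).
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "addons-386094").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh port:", strings.TrimSpace(string(out))) // e.g. 32768 in this run
}
```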
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-386094 -n addons-386094
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-386094 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-386094 logs -n 25: (1.071144936s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p binary-mirror-604163 --alsologtostderr --binary-mirror http://127.0.0.1:33157 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-604163 │ jenkins │ v1.37.0 │ 21 Nov 25 23:46 UTC │                     │
	│ delete  │ -p binary-mirror-604163                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-604163 │ jenkins │ v1.37.0 │ 21 Nov 25 23:46 UTC │ 21 Nov 25 23:46 UTC │
	│ addons  │ enable dashboard -p addons-386094                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-386094        │ jenkins │ v1.37.0 │ 21 Nov 25 23:46 UTC │                     │
	│ addons  │ disable dashboard -p addons-386094                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-386094        │ jenkins │ v1.37.0 │ 21 Nov 25 23:46 UTC │                     │
	│ start   │ -p addons-386094 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-386094        │ jenkins │ v1.37.0 │ 21 Nov 25 23:46 UTC │ 21 Nov 25 23:48 UTC │
	│ addons  │ addons-386094 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-386094        │ jenkins │ v1.37.0 │ 21 Nov 25 23:48 UTC │                     │
	│ addons  │ addons-386094 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-386094        │ jenkins │ v1.37.0 │ 21 Nov 25 23:48 UTC │                     │
	│ addons  │ enable headlamp -p addons-386094 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-386094        │ jenkins │ v1.37.0 │ 21 Nov 25 23:48 UTC │                     │
	│ addons  │ addons-386094 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-386094        │ jenkins │ v1.37.0 │ 21 Nov 25 23:48 UTC │                     │
	│ addons  │ addons-386094 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-386094        │ jenkins │ v1.37.0 │ 21 Nov 25 23:48 UTC │                     │
	│ addons  │ addons-386094 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-386094        │ jenkins │ v1.37.0 │ 21 Nov 25 23:48 UTC │                     │
	│ addons  │ addons-386094 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-386094        │ jenkins │ v1.37.0 │ 21 Nov 25 23:48 UTC │                     │
	│ addons  │ addons-386094 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-386094        │ jenkins │ v1.37.0 │ 21 Nov 25 23:48 UTC │                     │
	│ addons  │ addons-386094 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-386094        │ jenkins │ v1.37.0 │ 21 Nov 25 23:48 UTC │                     │
	│ ip      │ addons-386094 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-386094        │ jenkins │ v1.37.0 │ 21 Nov 25 23:48 UTC │ 21 Nov 25 23:48 UTC │
	│ addons  │ addons-386094 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-386094        │ jenkins │ v1.37.0 │ 21 Nov 25 23:48 UTC │                     │
	│ ssh     │ addons-386094 ssh cat /opt/local-path-provisioner/pvc-60032366-7407-48ad-af71-327345f784b4_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-386094        │ jenkins │ v1.37.0 │ 21 Nov 25 23:48 UTC │ 21 Nov 25 23:48 UTC │
	│ addons  │ addons-386094 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-386094        │ jenkins │ v1.37.0 │ 21 Nov 25 23:48 UTC │                     │
	│ addons  │ addons-386094 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-386094        │ jenkins │ v1.37.0 │ 21 Nov 25 23:48 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-386094                                                                                                                                                                                                                                                                                                                                                                                           │ addons-386094        │ jenkins │ v1.37.0 │ 21 Nov 25 23:48 UTC │ 21 Nov 25 23:48 UTC │
	│ addons  │ addons-386094 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-386094        │ jenkins │ v1.37.0 │ 21 Nov 25 23:48 UTC │                     │
	│ ssh     │ addons-386094 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-386094        │ jenkins │ v1.37.0 │ 21 Nov 25 23:48 UTC │                     │
	│ addons  │ addons-386094 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-386094        │ jenkins │ v1.37.0 │ 21 Nov 25 23:49 UTC │                     │
	│ addons  │ addons-386094 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-386094        │ jenkins │ v1.37.0 │ 21 Nov 25 23:49 UTC │                     │
	│ ip      │ addons-386094 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-386094        │ jenkins │ v1.37.0 │ 21 Nov 25 23:51 UTC │ 21 Nov 25 23:51 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 23:46:18
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1121 23:46:18.292103   15929 out.go:360] Setting OutFile to fd 1 ...
	I1121 23:46:18.292387   15929 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:46:18.292399   15929 out.go:374] Setting ErrFile to fd 2...
	I1121 23:46:18.292404   15929 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:46:18.292607   15929 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9122/.minikube/bin
	I1121 23:46:18.293093   15929 out.go:368] Setting JSON to false
	I1121 23:46:18.293959   15929 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1727,"bootTime":1763767051,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1121 23:46:18.294011   15929 start.go:143] virtualization: kvm guest
	I1121 23:46:18.295677   15929 out.go:179] * [addons-386094] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1121 23:46:18.296745   15929 notify.go:221] Checking for updates...
	I1121 23:46:18.296750   15929 out.go:179]   - MINIKUBE_LOCATION=21934
	I1121 23:46:18.297894   15929 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 23:46:18.299274   15929 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-9122/kubeconfig
	I1121 23:46:18.300355   15929 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-9122/.minikube
	I1121 23:46:18.301474   15929 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1121 23:46:18.302426   15929 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 23:46:18.303568   15929 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 23:46:18.325487   15929 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1121 23:46:18.325629   15929 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 23:46:18.383350   15929 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-21 23:46:18.374065415 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 23:46:18.383490   15929 docker.go:319] overlay module found
	I1121 23:46:18.385043   15929 out.go:179] * Using the docker driver based on user configuration
	I1121 23:46:18.385957   15929 start.go:309] selected driver: docker
	I1121 23:46:18.385970   15929 start.go:930] validating driver "docker" against <nil>
	I1121 23:46:18.385983   15929 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 23:46:18.386727   15929 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 23:46:18.439164   15929 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-21 23:46:18.430618832 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 23:46:18.439331   15929 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1121 23:46:18.439542   15929 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 23:46:18.440908   15929 out.go:179] * Using Docker driver with root privileges
	I1121 23:46:18.441850   15929 cni.go:84] Creating CNI manager for ""
	I1121 23:46:18.441904   15929 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 23:46:18.441913   15929 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1121 23:46:18.441968   15929 start.go:353] cluster config:
	{Name:addons-386094 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-386094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 23:46:18.443112   15929 out.go:179] * Starting "addons-386094" primary control-plane node in "addons-386094" cluster
	I1121 23:46:18.444026   15929 cache.go:134] Beginning downloading kic base image for docker with crio
	I1121 23:46:18.445018   15929 out.go:179] * Pulling base image v0.0.48-1763588073-21934 ...
	I1121 23:46:18.445992   15929 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 23:46:18.446026   15929 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21934-9122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1121 23:46:18.446038   15929 cache.go:65] Caching tarball of preloaded images
	I1121 23:46:18.446124   15929 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon
	I1121 23:46:18.446156   15929 preload.go:238] Found /home/jenkins/minikube-integration/21934-9122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1121 23:46:18.446168   15929 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1121 23:46:18.446551   15929 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/config.json ...
	I1121 23:46:18.446581   15929 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/config.json: {Name:mkb89b922b64e005a66f42b0754d650cb040a056 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
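
The two WriteFile entries above show the profile config being written under a file lock with a 500ms retry delay and a 1m timeout. A minimal Go sketch of that poll-until-timeout pattern; acquireLock is a hypothetical helper for illustration, not minikube's actual lock.go:

	// Hypothetical sketch of the Delay:500ms Timeout:1m0s lock pattern above.
	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// acquireLock polls for an exclusive lock file until the timeout lapses.
	func acquireLock(path string, delay, timeout time.Duration) (*os.File, error) {
		deadline := time.Now().Add(timeout)
		for {
			// O_EXCL makes creation fail while another holder owns the file.
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
			if err == nil {
				return f, nil
			}
			if time.Now().After(deadline) {
				return nil, fmt.Errorf("timed out acquiring %s: %w", path, err)
			}
			time.Sleep(delay)
		}
	}

	func main() {
		f, err := acquireLock("/tmp/config.json.lock", 500*time.Millisecond, time.Minute)
		if err != nil {
			panic(err)
		}
		defer os.Remove(f.Name())
		defer f.Close()
		// ... write the config file while holding the lock ...
	}
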
	I1121 23:46:18.461600   15929 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e to local cache
	I1121 23:46:18.461699   15929 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local cache directory
	I1121 23:46:18.461714   15929 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local cache directory, skipping pull
	I1121 23:46:18.461718   15929 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e exists in cache, skipping pull
	I1121 23:46:18.461724   15929 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e as a tarball
	I1121 23:46:18.461731   15929 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e from local cache
	I1121 23:46:30.292793   15929 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e from cached tarball
	I1121 23:46:30.292842   15929 cache.go:243] Successfully downloaded all kic artifacts
	I1121 23:46:30.292873   15929 start.go:360] acquireMachinesLock for addons-386094: {Name:mk78cef021a6236ff8b6ca4fd56cc6d4acfe96b4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 23:46:30.292959   15929 start.go:364] duration metric: took 68.729µs to acquireMachinesLock for "addons-386094"
	I1121 23:46:30.292982   15929 start.go:93] Provisioning new machine with config: &{Name:addons-386094 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-386094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1121 23:46:30.293046   15929 start.go:125] createHost starting for "" (driver="docker")
	I1121 23:46:30.295251   15929 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1121 23:46:30.295486   15929 start.go:159] libmachine.API.Create for "addons-386094" (driver="docker")
	I1121 23:46:30.295516   15929 client.go:173] LocalClient.Create starting
	I1121 23:46:30.295605   15929 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem
	I1121 23:46:30.378245   15929 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem
	I1121 23:46:30.423361   15929 cli_runner.go:164] Run: docker network inspect addons-386094 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1121 23:46:30.439969   15929 cli_runner.go:211] docker network inspect addons-386094 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1121 23:46:30.440067   15929 network_create.go:284] running [docker network inspect addons-386094] to gather additional debugging logs...
	I1121 23:46:30.440092   15929 cli_runner.go:164] Run: docker network inspect addons-386094
	W1121 23:46:30.455228   15929 cli_runner.go:211] docker network inspect addons-386094 returned with exit code 1
	I1121 23:46:30.455251   15929 network_create.go:287] error running [docker network inspect addons-386094]: docker network inspect addons-386094: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-386094 not found
	I1121 23:46:30.455265   15929 network_create.go:289] output of [docker network inspect addons-386094]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-386094 not found
	
	** /stderr **
	I1121 23:46:30.455344   15929 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 23:46:30.471011   15929 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d6f4f0}
	I1121 23:46:30.471041   15929 network_create.go:124] attempt to create docker network addons-386094 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1121 23:46:30.471103   15929 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-386094 addons-386094
	I1121 23:46:30.515114   15929 network_create.go:108] docker network addons-386094 192.168.49.0/24 created
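
network.go picks 192.168.49.0/24 above by walking candidate private /24 blocks and taking the first one that no host interface already occupies. A rough Go sketch of that idea; the candidate list and the overlap test here are simplifying assumptions, not minikube's exact logic:

	package main

	import (
		"fmt"
		"net"
	)

	// firstFreeSubnet returns the first candidate /24 that does not
	// overlap any address already assigned on the host.
	func firstFreeSubnet(candidates []string) (*net.IPNet, error) {
		addrs, err := net.InterfaceAddrs()
		if err != nil {
			return nil, err
		}
		for _, c := range candidates {
			_, subnet, err := net.ParseCIDR(c)
			if err != nil {
				return nil, err
			}
			taken := false
			for _, a := range addrs {
				ipnet, ok := a.(*net.IPNet)
				if !ok {
					continue
				}
				// Overlap if either network contains the other's base address.
				if subnet.Contains(ipnet.IP) || ipnet.Contains(subnet.IP) {
					taken = true
					break
				}
			}
			if !taken {
				return subnet, nil
			}
		}
		return nil, fmt.Errorf("no free subnet among candidates")
	}

	func main() {
		// 192.168.49.0/24 is the first candidate chosen in this run.
		s, err := firstFreeSubnet([]string{"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24"})
		if err != nil {
			panic(err)
		}
		fmt.Println("using", s)
	}
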
	I1121 23:46:30.515142   15929 kic.go:121] calculated static IP "192.168.49.2" for the "addons-386094" container
	I1121 23:46:30.515193   15929 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1121 23:46:30.529426   15929 cli_runner.go:164] Run: docker volume create addons-386094 --label name.minikube.sigs.k8s.io=addons-386094 --label created_by.minikube.sigs.k8s.io=true
	I1121 23:46:30.545238   15929 oci.go:103] Successfully created a docker volume addons-386094
	I1121 23:46:30.545300   15929 cli_runner.go:164] Run: docker run --rm --name addons-386094-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-386094 --entrypoint /usr/bin/test -v addons-386094:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -d /var/lib
	I1121 23:46:36.901445   15929 cli_runner.go:217] Completed: docker run --rm --name addons-386094-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-386094 --entrypoint /usr/bin/test -v addons-386094:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -d /var/lib: (6.356071954s)
	I1121 23:46:36.901480   15929 oci.go:107] Successfully prepared a docker volume addons-386094
	I1121 23:46:36.901549   15929 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 23:46:36.901561   15929 kic.go:194] Starting extracting preloaded images to volume ...
	I1121 23:46:36.901633   15929 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21934-9122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-386094:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -I lz4 -xf /preloaded.tar -C /extractDir
	I1121 23:46:41.132842   15929 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21934-9122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-386094:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -I lz4 -xf /preloaded.tar -C /extractDir: (4.231119506s)
	I1121 23:46:41.132879   15929 kic.go:203] duration metric: took 4.231314624s to extract preloaded images to volume ...
	W1121 23:46:41.132969   15929 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1121 23:46:41.133016   15929 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1121 23:46:41.133091   15929 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1121 23:46:41.192691   15929 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-386094 --name addons-386094 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-386094 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-386094 --network addons-386094 --ip 192.168.49.2 --volume addons-386094:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e
	I1121 23:46:41.490558   15929 cli_runner.go:164] Run: docker container inspect addons-386094 --format={{.State.Running}}
	I1121 23:46:41.508982   15929 cli_runner.go:164] Run: docker container inspect addons-386094 --format={{.State.Status}}
	I1121 23:46:41.526396   15929 cli_runner.go:164] Run: docker exec addons-386094 stat /var/lib/dpkg/alternatives/iptables
	I1121 23:46:41.577985   15929 oci.go:144] the created container "addons-386094" has a running status.
	I1121 23:46:41.578016   15929 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21934-9122/.minikube/machines/addons-386094/id_rsa...
	I1121 23:46:41.733285   15929 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21934-9122/.minikube/machines/addons-386094/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1121 23:46:41.758148   15929 cli_runner.go:164] Run: docker container inspect addons-386094 --format={{.State.Status}}
	I1121 23:46:41.779276   15929 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1121 23:46:41.779295   15929 kic_runner.go:114] Args: [docker exec --privileged addons-386094 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1121 23:46:41.827416   15929 cli_runner.go:164] Run: docker container inspect addons-386094 --format={{.State.Status}}
	I1121 23:46:41.846588   15929 machine.go:94] provisionDockerMachine start ...
	I1121 23:46:41.846672   15929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386094
	I1121 23:46:41.865609   15929 main.go:143] libmachine: Using SSH client type: native
	I1121 23:46:41.865967   15929 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1121 23:46:41.865989   15929 main.go:143] libmachine: About to run SSH command:
	hostname
	I1121 23:46:41.986351   15929 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-386094
	
	I1121 23:46:41.986383   15929 ubuntu.go:182] provisioning hostname "addons-386094"
	I1121 23:46:41.986445   15929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386094
	I1121 23:46:42.003676   15929 main.go:143] libmachine: Using SSH client type: native
	I1121 23:46:42.003913   15929 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1121 23:46:42.003936   15929 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-386094 && echo "addons-386094" | sudo tee /etc/hostname
	I1121 23:46:42.132374   15929 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-386094
	
	I1121 23:46:42.132468   15929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386094
	I1121 23:46:42.149711   15929 main.go:143] libmachine: Using SSH client type: native
	I1121 23:46:42.149981   15929 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1121 23:46:42.150007   15929 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-386094' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-386094/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-386094' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1121 23:46:42.266783   15929 main.go:143] libmachine: SSH cmd err, output: <nil>: 
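
Each provisioning step above is a short SSH round-trip against the container's forwarded port (127.0.0.1:32768 in this run), authenticated with the generated machine key. A minimal sketch of one such round-trip using golang.org/x/crypto/ssh; the InsecureIgnoreHostKey callback matches a throwaway local machine, not production use:

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// Key path taken from the sshutil.go lines in this log.
		key, err := os.ReadFile("/home/jenkins/minikube-integration/21934-9122/.minikube/machines/addons-386094/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local throwaway machine only
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:32768", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()
		out, err := sess.CombinedOutput("hostname") // same first command as above
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s", out)
	}
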
	I1121 23:46:42.266815   15929 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21934-9122/.minikube CaCertPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21934-9122/.minikube}
	I1121 23:46:42.266837   15929 ubuntu.go:190] setting up certificates
	I1121 23:46:42.266848   15929 provision.go:84] configureAuth start
	I1121 23:46:42.266905   15929 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-386094
	I1121 23:46:42.283092   15929 provision.go:143] copyHostCerts
	I1121 23:46:42.283169   15929 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21934-9122/.minikube/ca.pem (1078 bytes)
	I1121 23:46:42.283345   15929 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21934-9122/.minikube/cert.pem (1123 bytes)
	I1121 23:46:42.283452   15929 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21934-9122/.minikube/key.pem (1675 bytes)
	I1121 23:46:42.283543   15929 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21934-9122/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca-key.pem org=jenkins.addons-386094 san=[127.0.0.1 192.168.49.2 addons-386094 localhost minikube]
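
The server certificate above carries the SAN set [127.0.0.1 192.168.49.2 addons-386094 localhost minikube] and is signed by the freshly generated CA. A condensed crypto/x509 sketch of issuing such a certificate; the serial numbers, key size, and error handling are simplified assumptions, not minikube's exact provisioning code:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// Self-signed CA standing in for .minikube/certs/ca.pem.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		ca := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server cert with the SAN set from the provision.go line above.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.addons-386094"}},
			DNSNames:     []string{"addons-386094", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &srvKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		fmt.Printf("server.pem: %d DER bytes\n", len(der))
	}
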
	I1121 23:46:42.388960   15929 provision.go:177] copyRemoteCerts
	I1121 23:46:42.389025   15929 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1121 23:46:42.389086   15929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386094
	I1121 23:46:42.405124   15929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/addons-386094/id_rsa Username:docker}
	I1121 23:46:42.493139   15929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1121 23:46:42.510404   15929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1121 23:46:42.525680   15929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1121 23:46:42.540716   15929 provision.go:87] duration metric: took 273.859034ms to configureAuth
	I1121 23:46:42.540737   15929 ubuntu.go:206] setting minikube options for container-runtime
	I1121 23:46:42.540875   15929 config.go:182] Loaded profile config "addons-386094": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:46:42.540964   15929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386094
	I1121 23:46:42.558045   15929 main.go:143] libmachine: Using SSH client type: native
	I1121 23:46:42.558310   15929 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1121 23:46:42.558327   15929 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1121 23:46:42.797870   15929 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1121 23:46:42.797894   15929 machine.go:97] duration metric: took 951.286104ms to provisionDockerMachine
	I1121 23:46:42.797908   15929 client.go:176] duration metric: took 12.502380531s to LocalClient.Create
	I1121 23:46:42.797922   15929 start.go:167] duration metric: took 12.502437401s to libmachine.API.Create "addons-386094"
	I1121 23:46:42.797929   15929 start.go:293] postStartSetup for "addons-386094" (driver="docker")
	I1121 23:46:42.797940   15929 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1121 23:46:42.797999   15929 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1121 23:46:42.798037   15929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386094
	I1121 23:46:42.814510   15929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/addons-386094/id_rsa Username:docker}
	I1121 23:46:42.903494   15929 ssh_runner.go:195] Run: cat /etc/os-release
	I1121 23:46:42.906654   15929 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1121 23:46:42.906684   15929 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1121 23:46:42.906697   15929 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-9122/.minikube/addons for local assets ...
	I1121 23:46:42.906753   15929 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-9122/.minikube/files for local assets ...
	I1121 23:46:42.906785   15929 start.go:296] duration metric: took 108.849723ms for postStartSetup
	I1121 23:46:42.907086   15929 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-386094
	I1121 23:46:42.923462   15929 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/config.json ...
	I1121 23:46:42.923694   15929 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 23:46:42.923732   15929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386094
	I1121 23:46:42.939466   15929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/addons-386094/id_rsa Username:docker}
	I1121 23:46:43.023230   15929 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1121 23:46:43.027662   15929 start.go:128] duration metric: took 12.734585583s to createHost
	I1121 23:46:43.027683   15929 start.go:83] releasing machines lock for "addons-386094", held for 12.734711993s
	I1121 23:46:43.027750   15929 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-386094
	I1121 23:46:43.043675   15929 ssh_runner.go:195] Run: cat /version.json
	I1121 23:46:43.043720   15929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386094
	I1121 23:46:43.043794   15929 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1121 23:46:43.043854   15929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386094
	I1121 23:46:43.060551   15929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/addons-386094/id_rsa Username:docker}
	I1121 23:46:43.060949   15929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/addons-386094/id_rsa Username:docker}
	I1121 23:46:43.198978   15929 ssh_runner.go:195] Run: systemctl --version
	I1121 23:46:43.204608   15929 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1121 23:46:43.234919   15929 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1121 23:46:43.238856   15929 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1121 23:46:43.238914   15929 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1121 23:46:43.261899   15929 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1121 23:46:43.261922   15929 start.go:496] detecting cgroup driver to use...
	I1121 23:46:43.261965   15929 detect.go:190] detected "systemd" cgroup driver on host os
	I1121 23:46:43.262008   15929 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1121 23:46:43.275337   15929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1121 23:46:43.285713   15929 docker.go:218] disabling cri-docker service (if available) ...
	I1121 23:46:43.285760   15929 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1121 23:46:43.299864   15929 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1121 23:46:43.315045   15929 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1121 23:46:43.386339   15929 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1121 23:46:43.469439   15929 docker.go:234] disabling docker service ...
	I1121 23:46:43.469500   15929 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1121 23:46:43.484915   15929 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1121 23:46:43.495673   15929 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1121 23:46:43.575976   15929 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1121 23:46:43.650631   15929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1121 23:46:43.661352   15929 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1121 23:46:43.673818   15929 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1121 23:46:43.673870   15929 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 23:46:43.682806   15929 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1121 23:46:43.682845   15929 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 23:46:43.690423   15929 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 23:46:43.697825   15929 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 23:46:43.705363   15929 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1121 23:46:43.712719   15929 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 23:46:43.720133   15929 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 23:46:43.731825   15929 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 23:46:43.739337   15929 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1121 23:46:43.745760   15929 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1121 23:46:43.745806   15929 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1121 23:46:43.756159   15929 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1121 23:46:43.763083   15929 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 23:46:43.835464   15929 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1121 23:46:43.960021   15929 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1121 23:46:43.960116   15929 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
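
After restarting crio, start.go polls for the CRI socket rather than assuming the restart finished. A small Go sketch of that wait loop; the 500ms poll interval is an assumption for illustration:

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls until the socket path exists or the timeout lapses.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("socket %s did not appear within %v", path, timeout)
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			panic(err)
		}
		fmt.Println("crio socket ready")
	}
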
	I1121 23:46:43.963715   15929 start.go:564] Will wait 60s for crictl version
	I1121 23:46:43.963774   15929 ssh_runner.go:195] Run: which crictl
	I1121 23:46:43.966834   15929 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1121 23:46:43.989202   15929 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1121 23:46:43.989285   15929 ssh_runner.go:195] Run: crio --version
	I1121 23:46:44.014427   15929 ssh_runner.go:195] Run: crio --version
	I1121 23:46:44.041204   15929 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1121 23:46:44.042335   15929 cli_runner.go:164] Run: docker network inspect addons-386094 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 23:46:44.058518   15929 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1121 23:46:44.062074   15929 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 23:46:44.071550   15929 kubeadm.go:884] updating cluster {Name:addons-386094 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-386094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1121 23:46:44.071661   15929 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 23:46:44.071697   15929 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 23:46:44.100242   15929 crio.go:514] all images are preloaded for cri-o runtime.
	I1121 23:46:44.100257   15929 crio.go:433] Images already preloaded, skipping extraction
	I1121 23:46:44.100291   15929 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 23:46:44.123156   15929 crio.go:514] all images are preloaded for cri-o runtime.
	I1121 23:46:44.123172   15929 cache_images.go:86] Images are preloaded, skipping loading
	I1121 23:46:44.123179   15929 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1121 23:46:44.123261   15929 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-386094 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-386094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1121 23:46:44.123316   15929 ssh_runner.go:195] Run: crio config
	I1121 23:46:44.162965   15929 cni.go:84] Creating CNI manager for ""
	I1121 23:46:44.162985   15929 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 23:46:44.163003   15929 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1121 23:46:44.163025   15929 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-386094 NodeName:addons-386094 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1121 23:46:44.163178   15929 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-386094"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1121 23:46:44.163230   15929 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1121 23:46:44.170260   15929 binaries.go:51] Found k8s binaries, skipping transfer
	I1121 23:46:44.170305   15929 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1121 23:46:44.177200   15929 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1121 23:46:44.188504   15929 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1121 23:46:44.201639   15929 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1121 23:46:44.212529   15929 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1121 23:46:44.215550   15929 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 23:46:44.224305   15929 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 23:46:44.297273   15929 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 23:46:44.319790   15929 certs.go:69] Setting up /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094 for IP: 192.168.49.2
	I1121 23:46:44.319811   15929 certs.go:195] generating shared ca certs ...
	I1121 23:46:44.319827   15929 certs.go:227] acquiring lock for ca certs: {Name:mk04e97b22fe69e444baefaaf0d53a0afe979470 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:46:44.319944   15929 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21934-9122/.minikube/ca.key
	I1121 23:46:44.348846   15929 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-9122/.minikube/ca.crt ...
	I1121 23:46:44.348867   15929 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/ca.crt: {Name:mkea849deea592b6bfe00d3ded9d602ecb5c2ed1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:46:44.349001   15929 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-9122/.minikube/ca.key ...
	I1121 23:46:44.349012   15929 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/ca.key: {Name:mke9cb529f46b649a6be1ccb61fe02278e3a93d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:46:44.349094   15929 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21934-9122/.minikube/proxy-client-ca.key
	I1121 23:46:44.446982   15929 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-9122/.minikube/proxy-client-ca.crt ...
	I1121 23:46:44.447008   15929 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/proxy-client-ca.crt: {Name:mk10973c1cd72755f24858e36c099a37cb8141d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:46:44.447165   15929 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-9122/.minikube/proxy-client-ca.key ...
	I1121 23:46:44.447177   15929 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/proxy-client-ca.key: {Name:mk06eca966f5e85631f257972802afd34e2b6c55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:46:44.447245   15929 certs.go:257] generating profile certs ...
	I1121 23:46:44.447323   15929 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/client.key
	I1121 23:46:44.447343   15929 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/client.crt with IP's: []
	I1121 23:46:44.500003   15929 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/client.crt ...
	I1121 23:46:44.500025   15929 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/client.crt: {Name:mk4db81f59767802317cd84b2c8d5697fd53f0af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:46:44.500183   15929 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/client.key ...
	I1121 23:46:44.500196   15929 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/client.key: {Name:mk2ef87ba4917233426635eff4f07a22bcc4a4bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:46:44.500268   15929 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/apiserver.key.d0250c28
	I1121 23:46:44.500287   15929 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/apiserver.crt.d0250c28 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1121 23:46:44.643185   15929 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/apiserver.crt.d0250c28 ...
	I1121 23:46:44.643210   15929 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/apiserver.crt.d0250c28: {Name:mkc2d45a6f019b81d14713bb8042d30c2d6c11cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:46:44.643359   15929 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/apiserver.key.d0250c28 ...
	I1121 23:46:44.643373   15929 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/apiserver.key.d0250c28: {Name:mk03bbf321a6e9ef5aae40d73128d4835b02eebe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:46:44.643446   15929 certs.go:382] copying /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/apiserver.crt.d0250c28 -> /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/apiserver.crt
	I1121 23:46:44.643544   15929 certs.go:386] copying /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/apiserver.key.d0250c28 -> /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/apiserver.key
	I1121 23:46:44.643601   15929 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/proxy-client.key
	I1121 23:46:44.643620   15929 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/proxy-client.crt with IP's: []
	I1121 23:46:44.760579   15929 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/proxy-client.crt ...
	I1121 23:46:44.760602   15929 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/proxy-client.crt: {Name:mk106507d6c193afb25675870f56a1094f6b6311 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:46:44.760746   15929 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/proxy-client.key ...
	I1121 23:46:44.760758   15929 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/proxy-client.key: {Name:mkac8987c15993b47810e86bf565985768b08430 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:46:44.760920   15929 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca-key.pem (1679 bytes)
	I1121 23:46:44.760956   15929 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem (1078 bytes)
	I1121 23:46:44.760981   15929 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem (1123 bytes)
	I1121 23:46:44.761004   15929 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/key.pem (1675 bytes)
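
The certs.go steps above build a two-level hierarchy: shared CAs (minikubeCA, proxyClientCA) under .minikube, then per-profile leaf certs, with the apiserver cert carrying IP SANs for the service VIP (10.96.0.1), loopback, and the node IP 192.168.49.2. A self-contained sketch of that signing step using crypto/x509 (names are illustrative, not minikube's; some error handling elided for brevity):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        // Self-signed CA, standing in for minikubeCA (errors elided).
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Serving cert with the same IP SANs as the log line above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("192.168.49.2"),
            },
        }
        srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        if err != nil {
            panic(err)
        }
        fmt.Printf("issued %d-byte DER cert signed by %s\n", len(srvDER), caCert.Subject.CommonName)
    }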
	I1121 23:46:44.761643   15929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1121 23:46:44.778368   15929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1121 23:46:44.793741   15929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1121 23:46:44.808981   15929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1121 23:46:44.824472   15929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1121 23:46:44.839774   15929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1121 23:46:44.855082   15929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1121 23:46:44.870233   15929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1121 23:46:44.885239   15929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1121 23:46:44.902155   15929 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1121 23:46:44.912847   15929 ssh_runner.go:195] Run: openssl version
	I1121 23:46:44.918206   15929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1121 23:46:44.927400   15929 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1121 23:46:44.930504   15929 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 23:46 /usr/share/ca-certificates/minikubeCA.pem
	I1121 23:46:44.930559   15929 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1121 23:46:44.962792   15929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
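
The openssl steps above install minikubeCA into the system trust store: compute the subject hash, then symlink /etc/ssl/certs/<hash>.0 (here b5213941.0) at the PEM so hash-based CA lookup finds it. A sketch that mirrors those two commands from Go (assumes openssl is on PATH, as it is inside the node container; illustrative only):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        // Same as `openssl x509 -hash -noout -in ...` in the log.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout",
            "-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. b5213941
        // Check the <hash>.0 symlink that c_rehash-style lookups rely on.
        link := "/etc/ssl/certs/" + hash + ".0"
        target, err := os.Readlink(link)
        fmt.Println(link, "->", target, err)
    }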
	I1121 23:46:44.969807   15929 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1121 23:46:44.972737   15929 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1121 23:46:44.972775   15929 kubeadm.go:401] StartCluster: {Name:addons-386094 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-386094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 23:46:44.972842   15929 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 23:46:44.972875   15929 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 23:46:44.997233   15929 cri.go:89] found id: ""
	I1121 23:46:44.997291   15929 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1121 23:46:45.004184   15929 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1121 23:46:45.011224   15929 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1121 23:46:45.011260   15929 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1121 23:46:45.017870   15929 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1121 23:46:45.017887   15929 kubeadm.go:158] found existing configuration files:
	
	I1121 23:46:45.017920   15929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1121 23:46:45.024537   15929 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1121 23:46:45.024575   15929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1121 23:46:45.030826   15929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1121 23:46:45.037401   15929 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1121 23:46:45.037451   15929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1121 23:46:45.043693   15929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1121 23:46:45.050213   15929 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1121 23:46:45.050252   15929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1121 23:46:45.056468   15929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1121 23:46:45.062900   15929 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1121 23:46:45.062947   15929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
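
The grep/rm pairs above implement stale-config cleanup: each kubeconfig under /etc/kubernetes must point at https://control-plane.minikube.internal:8443, and any file that does not (or cannot be read, as on this first start) is removed before kubeadm init runs. The loop, sketched in Go (illustrative, not minikube's implementation):

    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    func main() {
        const want = "https://control-plane.minikube.internal:8443"
        for _, f := range []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            data, err := os.ReadFile(f)
            if err != nil || !bytes.Contains(data, []byte(want)) {
                // Missing file or wrong endpoint: treat as stale and clear it.
                os.Remove(f)
                fmt.Println("removed (stale or absent):", f)
            }
        }
    }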
	I1121 23:46:45.069230   15929 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1121 23:46:45.102343   15929 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1121 23:46:45.102409   15929 kubeadm.go:319] [preflight] Running pre-flight checks
	I1121 23:46:45.132265   15929 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1121 23:46:45.132353   15929 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1121 23:46:45.132405   15929 kubeadm.go:319] OS: Linux
	I1121 23:46:45.132466   15929 kubeadm.go:319] CGROUPS_CPU: enabled
	I1121 23:46:45.132544   15929 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1121 23:46:45.132607   15929 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1121 23:46:45.132699   15929 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1121 23:46:45.132780   15929 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1121 23:46:45.132849   15929 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1121 23:46:45.132952   15929 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1121 23:46:45.133021   15929 kubeadm.go:319] CGROUPS_IO: enabled
	I1121 23:46:45.182928   15929 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1121 23:46:45.183069   15929 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1121 23:46:45.183205   15929 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1121 23:46:45.190330   15929 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1121 23:46:45.192225   15929 out.go:252]   - Generating certificates and keys ...
	I1121 23:46:45.192296   15929 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1121 23:46:45.192375   15929 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1121 23:46:45.295275   15929 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1121 23:46:45.370011   15929 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1121 23:46:45.508369   15929 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1121 23:46:45.652707   15929 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1121 23:46:46.166082   15929 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1121 23:46:46.166251   15929 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-386094 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1121 23:46:46.599889   15929 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1121 23:46:46.600023   15929 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-386094 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1121 23:46:46.726393   15929 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1121 23:46:46.902957   15929 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1121 23:46:46.994307   15929 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1121 23:46:46.994404   15929 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1121 23:46:47.032304   15929 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1121 23:46:47.250238   15929 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1121 23:46:47.381814   15929 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1121 23:46:47.593080   15929 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1121 23:46:48.059588   15929 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1121 23:46:48.060027   15929 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1121 23:46:48.063525   15929 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1121 23:46:48.064795   15929 out.go:252]   - Booting up control plane ...
	I1121 23:46:48.064921   15929 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1121 23:46:48.065021   15929 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1121 23:46:48.066549   15929 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1121 23:46:48.094696   15929 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1121 23:46:48.094834   15929 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1121 23:46:48.100583   15929 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1121 23:46:48.100863   15929 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1121 23:46:48.100922   15929 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1121 23:46:48.191838   15929 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1121 23:46:48.191973   15929 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1121 23:46:49.194089   15929 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001820793s
	I1121 23:46:49.197816   15929 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1121 23:46:49.197938   15929 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1121 23:46:49.198080   15929 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1121 23:46:49.198213   15929 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1121 23:46:50.442836   15929 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.244587389s
	I1121 23:46:50.444810   15929 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.247001708s
	I1121 23:46:52.199819   15929 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.001791434s
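
kubeadm's control-plane-check above probes three endpoints until they answer 200: the apiserver's /livez on the node IP, and the controller-manager and scheduler health ports on loopback. A sketch of that polling (InsecureSkipVerify keeps the sketch short, where kubeadm itself verifies against the cluster CA; the 4m0s cap matches the limit stated in the log):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   2 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        checks := map[string]string{
            "kube-apiserver":          "https://192.168.49.2:8443/livez",
            "kube-controller-manager": "https://127.0.0.1:10257/healthz",
            "kube-scheduler":          "https://127.0.0.1:10259/livez",
        }
        for name, url := range checks {
            deadline := time.Now().Add(4 * time.Minute)
            for {
                resp, err := client.Get(url)
                if err == nil {
                    ok := resp.StatusCode == http.StatusOK
                    resp.Body.Close()
                    if ok {
                        fmt.Printf("%s is healthy\n", name)
                        break
                    }
                }
                if time.Now().After(deadline) {
                    fmt.Printf("%s: timed out\n", name)
                    break
                }
                time.Sleep(250 * time.Millisecond)
            }
        }
    }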
	I1121 23:46:52.210782   15929 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1121 23:46:52.220431   15929 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1121 23:46:52.227371   15929 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1121 23:46:52.227597   15929 kubeadm.go:319] [mark-control-plane] Marking the node addons-386094 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1121 23:46:52.234188   15929 kubeadm.go:319] [bootstrap-token] Using token: 97huse.e9m5pfe7tq8jbjm1
	I1121 23:46:52.235480   15929 out.go:252]   - Configuring RBAC rules ...
	I1121 23:46:52.235631   15929 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1121 23:46:52.237950   15929 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1121 23:46:52.242323   15929 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1121 23:46:52.244538   15929 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1121 23:46:52.246498   15929 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1121 23:46:52.249179   15929 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1121 23:46:52.605362   15929 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1121 23:46:53.017982   15929 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1121 23:46:53.604874   15929 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1121 23:46:53.605773   15929 kubeadm.go:319] 
	I1121 23:46:53.605866   15929 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1121 23:46:53.605876   15929 kubeadm.go:319] 
	I1121 23:46:53.605970   15929 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1121 23:46:53.605985   15929 kubeadm.go:319] 
	I1121 23:46:53.606021   15929 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1121 23:46:53.606123   15929 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1121 23:46:53.606224   15929 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1121 23:46:53.606241   15929 kubeadm.go:319] 
	I1121 23:46:53.606304   15929 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1121 23:46:53.606313   15929 kubeadm.go:319] 
	I1121 23:46:53.606387   15929 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1121 23:46:53.606400   15929 kubeadm.go:319] 
	I1121 23:46:53.606484   15929 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1121 23:46:53.606588   15929 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1121 23:46:53.606666   15929 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1121 23:46:53.606679   15929 kubeadm.go:319] 
	I1121 23:46:53.606797   15929 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1121 23:46:53.606903   15929 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1121 23:46:53.606913   15929 kubeadm.go:319] 
	I1121 23:46:53.607014   15929 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 97huse.e9m5pfe7tq8jbjm1 \
	I1121 23:46:53.607146   15929 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b7f0723026bed3ab68bd6fad96097e10a75fbae3d2b8c0df51e0d691a79889b0 \
	I1121 23:46:53.607178   15929 kubeadm.go:319] 	--control-plane 
	I1121 23:46:53.607187   15929 kubeadm.go:319] 
	I1121 23:46:53.607281   15929 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1121 23:46:53.607289   15929 kubeadm.go:319] 
	I1121 23:46:53.607391   15929 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 97huse.e9m5pfe7tq8jbjm1 \
	I1121 23:46:53.607514   15929 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b7f0723026bed3ab68bd6fad96097e10a75fbae3d2b8c0df51e0d691a79889b0 
	I1121 23:46:53.609346   15929 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1121 23:46:53.609463   15929 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1121 23:46:53.609493   15929 cni.go:84] Creating CNI manager for ""
	I1121 23:46:53.609503   15929 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 23:46:53.610941   15929 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1121 23:46:53.611994   15929 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1121 23:46:53.615945   15929 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1121 23:46:53.615960   15929 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1121 23:46:53.628509   15929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1121 23:46:53.815999   15929 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1121 23:46:53.816092   15929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 23:46:53.816139   15929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-386094 minikube.k8s.io/updated_at=2025_11_21T23_46_53_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785 minikube.k8s.io/name=addons-386094 minikube.k8s.io/primary=true
	I1121 23:46:53.883995   15929 ops.go:34] apiserver oom_adj: -16
	I1121 23:46:53.884013   15929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 23:46:54.384046   15929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 23:46:54.884158   15929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 23:46:55.384557   15929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 23:46:55.884239   15929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 23:46:56.384703   15929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 23:46:56.884350   15929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 23:46:57.384087   15929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 23:46:57.884305   15929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 23:46:58.384143   15929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 23:46:58.443296   15929 kubeadm.go:1114] duration metric: took 4.627258247s to wait for elevateKubeSystemPrivileges
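
The repeating `kubectl get sa default` runs above are minikube's elevateKubeSystemPrivileges wait: poll every 500ms until the default service account exists, which signals that the controller-manager has finished bootstrapping kube-system. Sketched as a loop (the command is copied from the log; the 2-minute deadline is an assumption):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            err := exec.Command("sudo", "/var/lib/minikube/binaries/v1.34.1/kubectl",
                "get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig").Run()
            if err == nil {
                fmt.Println("default service account exists; kube-system privileges settled")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for default service account")
    }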
	I1121 23:46:58.443346   15929 kubeadm.go:403] duration metric: took 13.47056512s to StartCluster
	I1121 23:46:58.443370   15929 settings.go:142] acquiring lock: {Name:mk281bec5fc7c41e6f3fe8d6a1502b13a2db8fe5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:46:58.443484   15929 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21934-9122/kubeconfig
	I1121 23:46:58.443880   15929 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/kubeconfig: {Name:mk85f0b9d89ff824568ecbebf0d1111042a6bb93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:46:58.444072   15929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1121 23:46:58.444101   15929 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1121 23:46:58.444172   15929 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1121 23:46:58.444298   15929 config.go:182] Loaded profile config "addons-386094": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:46:58.444314   15929 addons.go:70] Setting cloud-spanner=true in profile "addons-386094"
	I1121 23:46:58.444333   15929 addons.go:239] Setting addon cloud-spanner=true in "addons-386094"
	I1121 23:46:58.444343   15929 addons.go:70] Setting registry=true in profile "addons-386094"
	I1121 23:46:58.444332   15929 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-386094"
	I1121 23:46:58.444353   15929 addons.go:70] Setting volcano=true in profile "addons-386094"
	I1121 23:46:58.444361   15929 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-386094"
	I1121 23:46:58.444363   15929 addons.go:239] Setting addon volcano=true in "addons-386094"
	I1121 23:46:58.444369   15929 addons.go:70] Setting volumesnapshots=true in profile "addons-386094"
	I1121 23:46:58.444371   15929 host.go:66] Checking if "addons-386094" exists ...
	I1121 23:46:58.444379   15929 addons.go:239] Setting addon volumesnapshots=true in "addons-386094"
	I1121 23:46:58.444382   15929 host.go:66] Checking if "addons-386094" exists ...
	I1121 23:46:58.444371   15929 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-386094"
	I1121 23:46:58.444410   15929 host.go:66] Checking if "addons-386094" exists ...
	I1121 23:46:58.444411   15929 addons.go:70] Setting inspektor-gadget=true in profile "addons-386094"
	I1121 23:46:58.444443   15929 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-386094"
	I1121 23:46:58.444474   15929 addons.go:239] Setting addon inspektor-gadget=true in "addons-386094"
	I1121 23:46:58.444485   15929 addons.go:70] Setting default-storageclass=true in profile "addons-386094"
	I1121 23:46:58.444493   15929 host.go:66] Checking if "addons-386094" exists ...
	I1121 23:46:58.444307   15929 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-386094"
	I1121 23:46:58.444509   15929 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-386094"
	I1121 23:46:58.444514   15929 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-386094"
	I1121 23:46:58.444526   15929 host.go:66] Checking if "addons-386094" exists ...
	I1121 23:46:58.444795   15929 cli_runner.go:164] Run: docker container inspect addons-386094 --format={{.State.Status}}
	I1121 23:46:58.444792   15929 addons.go:70] Setting metrics-server=true in profile "addons-386094"
	I1121 23:46:58.444802   15929 addons.go:70] Setting registry-creds=true in profile "addons-386094"
	I1121 23:46:58.444815   15929 addons.go:239] Setting addon registry-creds=true in "addons-386094"
	I1121 23:46:58.444823   15929 addons.go:239] Setting addon metrics-server=true in "addons-386094"
	I1121 23:46:58.444845   15929 host.go:66] Checking if "addons-386094" exists ...
	I1121 23:46:58.444856   15929 host.go:66] Checking if "addons-386094" exists ...
	I1121 23:46:58.444938   15929 cli_runner.go:164] Run: docker container inspect addons-386094 --format={{.State.Status}}
	I1121 23:46:58.444941   15929 cli_runner.go:164] Run: docker container inspect addons-386094 --format={{.State.Status}}
	I1121 23:46:58.444945   15929 addons.go:70] Setting ingress-dns=true in profile "addons-386094"
	I1121 23:46:58.444954   15929 cli_runner.go:164] Run: docker container inspect addons-386094 --format={{.State.Status}}
	I1121 23:46:58.444959   15929 addons.go:239] Setting addon ingress-dns=true in "addons-386094"
	I1121 23:46:58.444985   15929 host.go:66] Checking if "addons-386094" exists ...
	I1121 23:46:58.445251   15929 cli_runner.go:164] Run: docker container inspect addons-386094 --format={{.State.Status}}
	I1121 23:46:58.445464   15929 cli_runner.go:164] Run: docker container inspect addons-386094 --format={{.State.Status}}
	I1121 23:46:58.444795   15929 cli_runner.go:164] Run: docker container inspect addons-386094 --format={{.State.Status}}
	I1121 23:46:58.446274   15929 addons.go:70] Setting gcp-auth=true in profile "addons-386094"
	I1121 23:46:58.446432   15929 mustload.go:66] Loading cluster: addons-386094
	I1121 23:46:58.444404   15929 host.go:66] Checking if "addons-386094" exists ...
	I1121 23:46:58.444299   15929 addons.go:70] Setting ingress=true in profile "addons-386094"
	I1121 23:46:58.444938   15929 cli_runner.go:164] Run: docker container inspect addons-386094 --format={{.State.Status}}
	I1121 23:46:58.444336   15929 addons.go:70] Setting yakd=true in profile "addons-386094"
	I1121 23:46:58.446285   15929 addons.go:70] Setting storage-provisioner=true in profile "addons-386094"
	I1121 23:46:58.446460   15929 addons.go:239] Setting addon storage-provisioner=true in "addons-386094"
	I1121 23:46:58.446485   15929 host.go:66] Checking if "addons-386094" exists ...
	I1121 23:46:58.445472   15929 cli_runner.go:164] Run: docker container inspect addons-386094 --format={{.State.Status}}
	I1121 23:46:58.446684   15929 cli_runner.go:164] Run: docker container inspect addons-386094 --format={{.State.Status}}
	I1121 23:46:58.446932   15929 addons.go:239] Setting addon ingress=true in "addons-386094"
	I1121 23:46:58.447323   15929 host.go:66] Checking if "addons-386094" exists ...
	I1121 23:46:58.444361   15929 addons.go:239] Setting addon registry=true in "addons-386094"
	I1121 23:46:58.447451   15929 host.go:66] Checking if "addons-386094" exists ...
	I1121 23:46:58.446946   15929 cli_runner.go:164] Run: docker container inspect addons-386094 --format={{.State.Status}}
	I1121 23:46:58.446355   15929 out.go:179] * Verifying Kubernetes components...
	I1121 23:46:58.447103   15929 addons.go:239] Setting addon yakd=true in "addons-386094"
	I1121 23:46:58.448074   15929 host.go:66] Checking if "addons-386094" exists ...
	I1121 23:46:58.446401   15929 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-386094"
	I1121 23:46:58.448439   15929 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-386094"
	I1121 23:46:58.448465   15929 host.go:66] Checking if "addons-386094" exists ...
	I1121 23:46:58.448933   15929 cli_runner.go:164] Run: docker container inspect addons-386094 --format={{.State.Status}}
	I1121 23:46:58.451309   15929 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 23:46:58.451952   15929 config.go:182] Loaded profile config "addons-386094": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:46:58.452283   15929 cli_runner.go:164] Run: docker container inspect addons-386094 --format={{.State.Status}}
	I1121 23:46:58.453942   15929 cli_runner.go:164] Run: docker container inspect addons-386094 --format={{.State.Status}}
	I1121 23:46:58.452612   15929 cli_runner.go:164] Run: docker container inspect addons-386094 --format={{.State.Status}}
	I1121 23:46:58.456430   15929 cli_runner.go:164] Run: docker container inspect addons-386094 --format={{.State.Status}}
	I1121 23:46:58.457063   15929 cli_runner.go:164] Run: docker container inspect addons-386094 --format={{.State.Status}}
	W1121 23:46:58.502004   15929 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1121 23:46:58.523888   15929 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1121 23:46:58.525013   15929 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1121 23:46:58.525108   15929 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1121 23:46:58.525529   15929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1121 23:46:58.525597   15929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386094
	I1121 23:46:58.525202   15929 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1121 23:46:58.526887   15929 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1121 23:46:58.528094   15929 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1121 23:46:58.528517   15929 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1121 23:46:58.528808   15929 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1121 23:46:58.528987   15929 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1121 23:46:58.529448   15929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386094
	I1121 23:46:58.531883   15929 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1121 23:46:58.532235   15929 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1121 23:46:58.534585   15929 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1121 23:46:58.534602   15929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1121 23:46:58.534630   15929 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1121 23:46:58.536098   15929 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1121 23:46:58.536181   15929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386094
	I1121 23:46:58.536780   15929 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-386094"
	I1121 23:46:58.536829   15929 host.go:66] Checking if "addons-386094" exists ...
	I1121 23:46:58.537289   15929 cli_runner.go:164] Run: docker container inspect addons-386094 --format={{.State.Status}}
	I1121 23:46:58.537549   15929 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1121 23:46:58.538212   15929 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1121 23:46:58.538840   15929 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1121 23:46:58.538906   15929 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1121 23:46:58.539158   15929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1121 23:46:58.539209   15929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386094
	I1121 23:46:58.539505   15929 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1121 23:46:58.539517   15929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1121 23:46:58.539559   15929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386094
	I1121 23:46:58.543263   15929 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.43
	I1121 23:46:58.543364   15929 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1121 23:46:58.544447   15929 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1121 23:46:58.544463   15929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1121 23:46:58.544508   15929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386094
	I1121 23:46:58.546095   15929 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1121 23:46:58.546962   15929 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1121 23:46:58.546979   15929 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1121 23:46:58.547033   15929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386094
	I1121 23:46:58.561409   15929 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1121 23:46:58.561502   15929 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1121 23:46:58.563155   15929 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1121 23:46:58.563178   15929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1121 23:46:58.563242   15929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386094
	I1121 23:46:58.566681   15929 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1121 23:46:58.566761   15929 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1121 23:46:58.568252   15929 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1121 23:46:58.568275   15929 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1121 23:46:58.568354   15929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386094
	I1121 23:46:58.568908   15929 addons.go:239] Setting addon default-storageclass=true in "addons-386094"
	I1121 23:46:58.570673   15929 host.go:66] Checking if "addons-386094" exists ...
	I1121 23:46:58.571183   15929 cli_runner.go:164] Run: docker container inspect addons-386094 --format={{.State.Status}}
	I1121 23:46:58.570083   15929 out.go:179]   - Using image docker.io/registry:3.0.0
	I1121 23:46:58.570160   15929 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1121 23:46:58.571608   15929 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1121 23:46:58.571678   15929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386094
	I1121 23:46:58.572604   15929 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1121 23:46:58.572753   15929 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1121 23:46:58.572765   15929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1121 23:46:58.572823   15929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386094
	I1121 23:46:58.581648   15929 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 23:46:58.581765   15929 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1121 23:46:58.581776   15929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1121 23:46:58.581836   15929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386094
	I1121 23:46:58.583984   15929 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 23:46:58.584004   15929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1121 23:46:58.584068   15929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386094
	I1121 23:46:58.587136   15929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/addons-386094/id_rsa Username:docker}
	I1121 23:46:58.587159   15929 host.go:66] Checking if "addons-386094" exists ...
	I1121 23:46:58.598449   15929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/addons-386094/id_rsa Username:docker}
	I1121 23:46:58.599323   15929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/addons-386094/id_rsa Username:docker}
	I1121 23:46:58.602964   15929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/addons-386094/id_rsa Username:docker}
	I1121 23:46:58.621153   15929 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1121 23:46:58.622064   15929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/addons-386094/id_rsa Username:docker}
	I1121 23:46:58.622121   15929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/addons-386094/id_rsa Username:docker}
	I1121 23:46:58.626715   15929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/addons-386094/id_rsa Username:docker}
	I1121 23:46:58.627492   15929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/addons-386094/id_rsa Username:docker}
	I1121 23:46:58.629102   15929 out.go:179]   - Using image docker.io/busybox:stable
	I1121 23:46:58.630229   15929 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1121 23:46:58.630392   15929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1121 23:46:58.630570   15929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386094
	I1121 23:46:58.642358   15929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
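
The sed pipeline above splices a hosts{} block into CoreDNS's Corefile ahead of the forward directive, so host.minikube.internal resolves to the gateway 192.168.49.1; the injection is confirmed further down by the "host record injected" line. What that edit does, sketched in Go on a hypothetical Corefile:

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        corefile := `.:53 {
            errors
            forward . /etc/resolv.conf
            cache 30
    }`
        hostsBlock := "        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }\n"
        var out strings.Builder
        for _, line := range strings.Split(corefile, "\n") {
            // Insert the hosts{} block just before the forward directive,
            // matching the sed `/^        forward .../i` address.
            if strings.HasPrefix(strings.TrimSpace(line), "forward .") {
                out.WriteString(hostsBlock)
            }
            out.WriteString(line + "\n")
        }
        fmt.Print(out.String())
    }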
	I1121 23:46:58.643869   15929 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1121 23:46:58.643901   15929 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1121 23:46:58.644041   15929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/addons-386094/id_rsa Username:docker}
	I1121 23:46:58.644073   15929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386094
	I1121 23:46:58.653096   15929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/addons-386094/id_rsa Username:docker}
	I1121 23:46:58.654086   15929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/addons-386094/id_rsa Username:docker}
	I1121 23:46:58.654156   15929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/addons-386094/id_rsa Username:docker}
	I1121 23:46:58.655207   15929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/addons-386094/id_rsa Username:docker}
	W1121 23:46:58.660914   15929 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1121 23:46:58.660964   15929 retry.go:31] will retry after 256.482979ms: ssh: handshake failed: EOF
	W1121 23:46:58.662139   15929 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1121 23:46:58.662192   15929 retry.go:31] will retry after 191.458542ms: ssh: handshake failed: EOF
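
The two dial failures above show minikube's SSH retry policy: log the handshake error, wait a short jittered delay, try again. A generic sketch of that retry shape (attempt count and base delay are assumptions, not minikube's values):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    func retry(attempts int, base time.Duration, op func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = op(); err == nil {
                return nil
            }
            // Jittered delay, like the 191ms/256ms waits in the log.
            d := base + time.Duration(rand.Int63n(int64(base)))
            fmt.Printf("will retry after %s: %v\n", d, err)
            time.Sleep(d)
        }
        return err
    }

    func main() {
        _ = retry(3, 200*time.Millisecond, func() error {
            return errors.New("ssh: handshake failed: EOF")
        })
    }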
	I1121 23:46:58.671531   15929 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 23:46:58.671778   15929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/addons-386094/id_rsa Username:docker}
	I1121 23:46:58.689953   15929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/addons-386094/id_rsa Username:docker}
	I1121 23:46:58.762809   15929 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1121 23:46:58.762839   15929 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1121 23:46:58.765257   15929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1121 23:46:58.773684   15929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1121 23:46:58.774413   15929 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1121 23:46:58.774432   15929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1121 23:46:58.783583   15929 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1121 23:46:58.783611   15929 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1121 23:46:58.794330   15929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1121 23:46:58.797122   15929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1121 23:46:58.800341   15929 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1121 23:46:58.800361   15929 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1121 23:46:58.806079   15929 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1121 23:46:58.806099   15929 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1121 23:46:58.806193   15929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1121 23:46:58.817313   15929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1121 23:46:58.821002   15929 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1121 23:46:58.821025   15929 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1121 23:46:58.823548   15929 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1121 23:46:58.823571   15929 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1121 23:46:58.830679   15929 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1121 23:46:58.830698   15929 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1121 23:46:58.831863   15929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1121 23:46:58.845846   15929 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1121 23:46:58.845869   15929 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1121 23:46:58.846019   15929 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1121 23:46:58.846028   15929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1121 23:46:58.857371   15929 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1121 23:46:58.857391   15929 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1121 23:46:58.861419   15929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1121 23:46:58.865549   15929 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1121 23:46:58.865582   15929 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1121 23:46:58.874385   15929 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1121 23:46:58.874405   15929 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1121 23:46:58.883886   15929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1121 23:46:58.893503   15929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1121 23:46:58.904641   15929 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1121 23:46:58.904676   15929 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1121 23:46:58.922102   15929 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1121 23:46:58.922150   15929 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1121 23:46:58.924602   15929 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1121 23:46:58.924625   15929 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1121 23:46:58.945444   15929 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1121 23:46:58.945476   15929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1121 23:46:58.960280   15929 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1121 23:46:58.960323   15929 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1121 23:46:58.975207   15929 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1121 23:46:58.975232   15929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1121 23:46:59.010143   15929 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1121 23:46:59.010191   15929 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1121 23:46:59.020912   15929 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
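
The host record injection above rewrites the coredns ConfigMap so that host.minikube.internal resolves to the gateway IP. One common way to express such a record is the CoreDNS hosts plugin; the sketch below patches the Corefile that way, assuming the default `.:53 {` server block (the exact edit minikube performs may differ):

    package main

    import (
    	"context"
    	"strings"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // illustrative path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	ctx := context.Background()
    	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}

    	// Splice a hosts{} block into the default server block so the static
    	// record is answered before the cluster forwarders are consulted.
    	hosts := ".:53 {\n    hosts {\n       192.168.49.1 host.minikube.internal\n       fallthrough\n    }"
    	cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"], ".:53 {", hosts, 1)

    	if _, err := cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
    		panic(err)
    	}
    }
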
	I1121 23:46:59.021842   15929 node_ready.go:35] waiting up to 6m0s for node "addons-386094" to be "Ready" ...
	I1121 23:46:59.022331   15929 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1121 23:46:59.022346   15929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1121 23:46:59.032523   15929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1121 23:46:59.054529   15929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 23:46:59.054842   15929 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1121 23:46:59.054859   15929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1121 23:46:59.101439   15929 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1121 23:46:59.101466   15929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1121 23:46:59.101632   15929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1121 23:46:59.111496   15929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1121 23:46:59.167602   15929 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1121 23:46:59.167627   15929 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1121 23:46:59.244662   15929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1121 23:46:59.530246   15929 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-386094" context rescaled to 1 replicas
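
The rescale reported by kapi.go:214 drops coredns to a single replica, which is enough for a one-node cluster. A sketch of the same operation through client-go's scale subresource (clientset construction from the cluster's kubeconfig is assumed; not minikube's exact code):

    package main

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // illustrative path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	ctx := context.Background()
    	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	scale.Spec.Replicas = 1 // rescale to a single replica, as the log reports
    	if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
    		panic(err)
    	}
    }

Going through the scale subresource avoids rewriting the whole Deployment spec, so it conflicts less with other controllers touching the same object.
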
	I1121 23:46:59.775150   15929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (1.009853516s)
	I1121 23:46:59.775233   15929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.001522528s)
	I1121 23:46:59.954152   15929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.147915951s)
	I1121 23:46:59.954188   15929 addons.go:495] Verifying addon ingress=true in "addons-386094"
	I1121 23:46:59.954186   15929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.136843097s)
	I1121 23:46:59.954272   15929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.122381541s)
	I1121 23:46:59.954302   15929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.092865741s)
	I1121 23:46:59.954498   15929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.060959987s)
	I1121 23:46:59.954519   15929 addons.go:495] Verifying addon metrics-server=true in "addons-386094"
	I1121 23:46:59.954357   15929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.070441742s)
	I1121 23:46:59.954554   15929 addons.go:495] Verifying addon registry=true in "addons-386094"
	I1121 23:46:59.955788   15929 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-386094 service yakd-dashboard -n yakd-dashboard
	
	I1121 23:46:59.955798   15929 out.go:179] * Verifying ingress addon...
	I1121 23:46:59.955826   15929 out.go:179] * Verifying registry addon...
	I1121 23:46:59.957996   15929 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1121 23:46:59.957998   15929 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	W1121 23:46:59.960784   15929 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
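
The default-storageclass warning above is a standard optimistic-concurrency failure: the storage class was modified between minikube's read and its write, so the update carried a stale resourceVersion and the API server rejected it with a conflict. The usual remedy is to re-read and retry, e.g. with client-go's conflict-retry helper (a sketch against the same local-path class named in the error; not minikube's actual callback):

    package main

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    	"k8s.io/client-go/util/retry"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // illustrative path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	ctx := context.Background()
    	// Re-read the object on every attempt so the update carries a fresh
    	// resourceVersion; RetryOnConflict retries only on 409 Conflict errors.
    	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
    		sc, err := cs.StorageV1().StorageClasses().Get(ctx, "local-path", metav1.GetOptions{})
    		if err != nil {
    			return err
    		}
    		if sc.Annotations == nil {
    			sc.Annotations = map[string]string{}
    		}
    		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
    		_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
    		return err
    	})
    	if err != nil {
    		panic(err)
    	}
    }
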
	I1121 23:46:59.961150   15929 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1121 23:46:59.961168   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:46:59.961516   15929 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1121 23:46:59.961536   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
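
Each kapi.go:96 line that follows is one poll of the pods matched by a label selector; the loop exits once every matched pod leaves Pending. A condensed version of such a wait loop (the selector and 6-minute budget mirror the log; this check stops at Running rather than full readiness):

    package main

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // illustrative path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	// Poll every 500ms until every pod matching the selector is Running.
    	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{
    				LabelSelector: "kubernetes.io/minikube-addons=registry",
    			})
    			if err != nil {
    				return false, err
    			}
    			if len(pods.Items) == 0 {
    				return false, nil // nothing scheduled yet; keep waiting
    			}
    			for _, p := range pods.Items {
    				if p.Status.Phase != corev1.PodRunning {
    					return false, nil
    				}
    			}
    			return true, nil
    		})
    	if err != nil {
    		panic(err)
    	}
    }
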
	I1121 23:47:00.371483   15929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.259935961s)
	W1121 23:47:00.371541   15929 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1121 23:47:00.371571   15929 retry.go:31] will retry after 289.175888ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
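
The failure above is a CRD establishment race: the same apply creates the VolumeSnapshot CRDs and a VolumeSnapshotClass object, but the REST mapping for the new kind only exists once the CRDs are accepted, hence "no matches for kind ... ensure CRDs are installed first". retry.go simply re-applies after a short backoff, which is why the later run succeeds; an explicit alternative is to wait for the CRD's Established condition before applying CRs, as sketched here with the apiextensions clientset (resource name from the log; kubeconfig path illustrative):

    package main

    import (
    	"context"
    	"time"

    	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
    	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // illustrative path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := apiextclient.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	// Block until the CRD reports Established=True; only then can objects
    	// of kind VolumeSnapshotClass be mapped and applied.
    	name := "volumesnapshotclasses.snapshot.storage.k8s.io"
    	err = wait.PollUntilContextTimeout(context.Background(), 300*time.Millisecond, time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			crd, err := cs.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, nil // not visible yet; keep polling
    			}
    			for _, c := range crd.Status.Conditions {
    				if c.Type == apiextv1.Established && c.Status == apiextv1.ConditionTrue {
    					return true, nil
    				}
    			}
    			return false, nil
    		})
    	if err != nil {
    		panic(err)
    	}
    }
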
	I1121 23:47:00.371726   15929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.127019443s)
	I1121 23:47:00.371750   15929 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-386094"
	I1121 23:47:00.374135   15929 out.go:179] * Verifying csi-hostpath-driver addon...
	I1121 23:47:00.376184   15929 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1121 23:47:00.378809   15929 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1121 23:47:00.378831   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:00.478846   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:00.478999   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:00.661278   15929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1121 23:47:00.878893   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:00.960837   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:00.960841   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1121 23:47:01.024485   15929 node_ready.go:57] node "addons-386094" has "Ready":"False" status (will retry)
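
The node_ready.go warnings interleaved below are individual probes of the node's Ready condition, which the kubelet flips to True once the runtime and CNI are up. A minimal version of that probe (node name from the log, 6-minute budget from node_ready.go:35):

    package main

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // illustrative path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	// "Ready":"False" in the log corresponds to this condition still being false.
    	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			node, err := cs.CoreV1().Nodes().Get(ctx, "addons-386094", metav1.GetOptions{})
    			if err != nil {
    				return false, err
    			}
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    	if err != nil {
    		panic(err)
    	}
    }
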
	I1121 23:47:01.379602   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:01.480301   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:01.480378   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:01.878634   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:01.960658   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:01.960758   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:02.379620   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:02.480501   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:02.480622   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:02.885347   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:02.960368   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:02.960481   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:03.081892   15929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.420572351s)
	I1121 23:47:03.379590   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:03.480548   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:03.480834   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1121 23:47:03.524341   15929 node_ready.go:57] node "addons-386094" has "Ready":"False" status (will retry)
	I1121 23:47:03.878731   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:03.960510   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:03.960680   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:04.379233   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:04.479498   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:04.479705   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:04.878727   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:04.961086   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:04.961260   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:05.379272   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:05.479932   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:05.480254   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:05.879422   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:05.960181   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:05.960382   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1121 23:47:06.025286   15929 node_ready.go:57] node "addons-386094" has "Ready":"False" status (will retry)
	I1121 23:47:06.198942   15929 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1121 23:47:06.199010   15929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386094
	I1121 23:47:06.216036   15929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/addons-386094/id_rsa Username:docker}
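
The cli_runner/sshutil pair above shows how the SSH endpoint is derived: the node container publishes 22/tcp on an ephemeral host port, which the Go template pulls out of `docker container inspect`, yielding 127.0.0.1:32768 here. The same lookup, shelled out from Go:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Same Go template the log shows: extract the host port mapped to the
    	// container's 22/tcp so SSH can connect through 127.0.0.1:<port>.
    	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
    	out, err := exec.Command("docker", "container", "inspect", "-f", format, "addons-386094").Output()
    	if err != nil {
    		panic(err)
    	}
    	port := strings.TrimSpace(string(out))
    	fmt.Printf("ssh endpoint: 127.0.0.1:%s\n", port)
    }
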
	I1121 23:47:06.309156   15929 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1121 23:47:06.320548   15929 addons.go:239] Setting addon gcp-auth=true in "addons-386094"
	I1121 23:47:06.320596   15929 host.go:66] Checking if "addons-386094" exists ...
	I1121 23:47:06.320932   15929 cli_runner.go:164] Run: docker container inspect addons-386094 --format={{.State.Status}}
	I1121 23:47:06.338034   15929 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1121 23:47:06.338100   15929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386094
	I1121 23:47:06.354067   15929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/addons-386094/id_rsa Username:docker}
	I1121 23:47:06.379014   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:06.439892   15929 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1121 23:47:06.440993   15929 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1121 23:47:06.441975   15929 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1121 23:47:06.441990   15929 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1121 23:47:06.453765   15929 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1121 23:47:06.453784   15929 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1121 23:47:06.461154   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:06.461158   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:06.465652   15929 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1121 23:47:06.465668   15929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1121 23:47:06.477261   15929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1121 23:47:06.754577   15929 addons.go:495] Verifying addon gcp-auth=true in "addons-386094"
	I1121 23:47:06.755753   15929 out.go:179] * Verifying gcp-auth addon...
	I1121 23:47:06.757521   15929 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1121 23:47:06.759614   15929 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1121 23:47:06.759628   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:06.878590   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:06.960504   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:06.960652   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:07.259683   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:07.378748   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:07.460868   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:07.460929   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:07.759879   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:07.879156   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:07.959923   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:07.959921   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:08.259868   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:08.379187   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:08.460007   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:08.460227   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1121 23:47:08.523946   15929 node_ready.go:57] node "addons-386094" has "Ready":"False" status (will retry)
	I1121 23:47:08.760248   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:08.878385   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:08.960332   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:08.960500   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:09.260347   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:09.378111   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:09.459911   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:09.460100   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:09.760034   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:09.879303   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:09.960197   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:09.960351   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:10.260234   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:10.378025   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:10.460956   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:10.461138   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:10.760203   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:10.879320   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:10.960100   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:10.960425   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1121 23:47:11.024075   15929 node_ready.go:57] node "addons-386094" has "Ready":"False" status (will retry)
	I1121 23:47:11.260099   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:11.379096   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:11.459911   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:11.459987   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:11.759977   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:11.879231   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:11.979739   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:11.979853   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:12.259794   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:12.378843   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:12.460947   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:12.461134   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:12.760288   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:12.878484   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:12.960693   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:12.960747   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1121 23:47:13.024616   15929 node_ready.go:57] node "addons-386094" has "Ready":"False" status (will retry)
	I1121 23:47:13.259783   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:13.378973   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:13.460794   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:13.460883   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:13.759917   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:13.879209   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:13.960332   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:13.960402   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:14.260577   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:14.378840   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:14.460955   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:14.461036   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:14.760011   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:14.879042   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:14.961038   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:14.961199   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:15.260357   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:15.378408   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:15.461115   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:15.461221   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1121 23:47:15.523769   15929 node_ready.go:57] node "addons-386094" has "Ready":"False" status (will retry)
	I1121 23:47:15.760085   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:15.879495   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:15.960240   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:15.960456   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:16.260436   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:16.378496   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:16.460497   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:16.460542   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:16.759589   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:16.878652   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:16.960643   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:16.960696   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:17.260358   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:17.378089   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:17.459936   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:17.460074   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1121 23:47:17.523794   15929 node_ready.go:57] node "addons-386094" has "Ready":"False" status (will retry)
	I1121 23:47:17.759924   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:17.879156   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:17.960104   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:17.960233   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:18.259905   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:18.379066   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:18.460203   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:18.460404   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:18.760520   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:18.878422   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:18.960451   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:18.960671   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:19.259516   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:19.378406   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:19.460179   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:19.460340   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1121 23:47:19.523954   15929 node_ready.go:57] node "addons-386094" has "Ready":"False" status (will retry)
	I1121 23:47:19.760212   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:19.879360   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:19.960291   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:19.960401   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:20.260288   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:20.378169   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:20.460005   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:20.460063   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:20.760000   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:20.879043   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:20.961135   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:20.961371   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:21.260191   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:21.378985   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:21.460898   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:21.461028   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:21.759636   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:21.878955   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:21.979389   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:21.979437   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1121 23:47:22.023988   15929 node_ready.go:57] node "addons-386094" has "Ready":"False" status (will retry)
	I1121 23:47:22.260524   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:22.378503   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:22.460588   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:22.460756   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:22.759513   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:22.878668   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:22.960900   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:22.960902   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:23.259789   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:23.378911   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:23.461158   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:23.461224   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:23.759822   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:23.879050   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:23.960873   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:23.961048   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:24.259909   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:24.379027   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:24.461520   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:24.461667   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1121 23:47:24.524502   15929 node_ready.go:57] node "addons-386094" has "Ready":"False" status (will retry)
	I1121 23:47:24.759756   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:24.878800   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:24.960807   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:24.960945   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:25.260030   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:25.378943   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:25.460732   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:25.460966   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:25.759623   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:25.878658   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:25.960561   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:25.960688   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:26.259476   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:26.378636   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:26.460439   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:26.460624   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1121 23:47:26.524569   15929 node_ready.go:57] node "addons-386094" has "Ready":"False" status (will retry)
	I1121 23:47:26.759749   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:26.878923   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:26.960757   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:26.960918   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:27.259912   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:27.378917   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:27.460939   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:27.461170   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:27.759763   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:27.878841   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:27.960939   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:27.961120   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:28.259984   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:28.379178   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:28.460032   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:28.460102   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:28.760446   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:28.878635   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:28.960586   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:28.960785   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1121 23:47:29.024466   15929 node_ready.go:57] node "addons-386094" has "Ready":"False" status (will retry)
	I1121 23:47:29.259585   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:29.378631   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:29.460568   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:29.460652   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:29.760349   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:29.878273   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:29.960234   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:29.960504   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:30.260145   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:30.379023   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:30.460820   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:30.460943   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:30.759900   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:30.879079   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:30.960077   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:30.960268   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:31.260125   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:31.379323   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:31.460364   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:31.460412   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1121 23:47:31.524180   15929 node_ready.go:57] node "addons-386094" has "Ready":"False" status (will retry)
	I1121 23:47:31.760943   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:31.879091   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:31.961157   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:31.961289   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:32.260290   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:32.378137   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:32.459936   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:32.460111   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:32.759946   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:32.878883   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:32.960786   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:32.960923   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:33.259696   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:33.378965   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:33.461023   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:33.461166   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:33.759963   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:33.879272   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:33.960226   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:33.960438   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1121 23:47:34.023881   15929 node_ready.go:57] node "addons-386094" has "Ready":"False" status (will retry)
	I1121 23:47:34.260196   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:34.379320   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:34.460126   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:34.460340   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:34.760375   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:34.878222   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:34.960138   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:34.960287   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:35.260333   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:35.378236   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:35.460453   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:35.460662   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:35.760645   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:35.878538   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:35.960521   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:35.960648   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1121 23:47:36.024254   15929 node_ready.go:57] node "addons-386094" has "Ready":"False" status (will retry)
	I1121 23:47:36.259901   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:36.378675   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:36.460584   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:36.460765   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:36.759516   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:36.878529   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:36.960591   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:36.960647   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:37.259441   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:37.378332   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:37.460218   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:37.460344   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:37.760346   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:37.878392   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:37.960604   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:37.960792   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1121 23:47:38.024533   15929 node_ready.go:57] node "addons-386094" has "Ready":"False" status (will retry)
	I1121 23:47:38.259672   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:38.378915   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:38.461050   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:38.461280   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:38.760048   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:38.879142   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:38.960166   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:38.960300   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:39.260276   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:39.378314   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:39.460260   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:39.460417   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:39.760709   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:39.878801   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:39.961033   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:39.961061   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:40.030795   15929 node_ready.go:49] node "addons-386094" is "Ready"
	I1121 23:47:40.030829   15929 node_ready.go:38] duration metric: took 41.008959779s for node "addons-386094" to be "Ready" ...
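
[Editor's note: the node_ready.go lines above poll the node object until its Ready condition flips to True, on roughly a two-second cadence. A minimal client-go sketch of that loop follows; it is illustrative only, not minikube's actual helper, and the interval/timeout values are assumptions.]

package waiter

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady blocks until the named node reports Ready=True, polling the
// API every two seconds to match the cadence visible in the log above.
func waitNodeReady(ctx context.Context, c kubernetes.Interface, name string) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API error: keep polling
			}
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}
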
	I1121 23:47:40.030849   15929 api_server.go:52] waiting for apiserver process to appear ...
	I1121 23:47:40.030987   15929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 23:47:40.050924   15929 api_server.go:72] duration metric: took 41.606788143s to wait for apiserver process to appear ...
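
[Editor's note: the two lines above run "sudo pgrep -xnf kube-apiserver.*minikube.*" over SSH inside the node until the apiserver process appears. A sketch of the same pgrep-polling pattern, run locally for simplicity; the function name and poll interval are assumptions:]

package waiter

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess re-runs pgrep until a process matching pattern shows up.
// pgrep exits 0 when at least one process matches and 1 when none do, so
// the command's exit status is the whole signal.
func waitForProcess(pattern string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// -x exact match, -n newest match, -f match against the full command line
		if exec.Command("pgrep", "-xnf", pattern).Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("no process matching %q within %s", pattern, timeout)
}
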
	I1121 23:47:40.050953   15929 api_server.go:88] waiting for apiserver healthz status ...
	I1121 23:47:40.050976   15929 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1121 23:47:40.057780   15929 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1121 23:47:40.059028   15929 api_server.go:141] control plane version: v1.34.1
	I1121 23:47:40.059074   15929 api_server.go:131] duration metric: took 8.092867ms to wait for apiserver health ...
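
[Editor's note: the healthz phase above is a plain HTTPS GET that succeeds when the apiserver returns 200 with body "ok". A self-contained sketch of that probe; skipping TLS verification stands in for loading the cluster CA, which a real client should do instead:]

package waiter

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz issues the same probe the log shows: GET /healthz and accept
// a 200 response whose body is "ok".
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	return nil // body is "ok" on a healthy apiserver
}

A call matching this run would be checkHealthz("https://192.168.49.2:8443/healthz").
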
	I1121 23:47:40.059086   15929 system_pods.go:43] waiting for kube-system pods to appear ...
	I1121 23:47:40.063746   15929 system_pods.go:59] 19 kube-system pods found
	I1121 23:47:40.063784   15929 system_pods.go:61] "coredns-66bc5c9577-jdqrr" [b648a6f7-a412-4075-bcb6-7fde7db81c13] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 23:47:40.063793   15929 system_pods.go:61] "csi-hostpath-attacher-0" [1f374a63-58b4-4dba-89a4-0d60da730c94] Pending
	I1121 23:47:40.063801   15929 system_pods.go:61] "csi-hostpath-resizer-0" [60799ca4-c6e0-485b-870a-7fe05ffeed25] Pending
	I1121 23:47:40.063807   15929 system_pods.go:61] "csi-hostpathplugin-bw962" [19b38a39-2690-45c4-8297-fd5a89899c9b] Pending
	I1121 23:47:40.063811   15929 system_pods.go:61] "etcd-addons-386094" [6deab294-ddbf-4424-8887-3b099c8fde2a] Running
	I1121 23:47:40.063816   15929 system_pods.go:61] "kindnet-nhwtc" [ee44da65-a7eb-41f1-88ce-d96fa74d3e79] Running
	I1121 23:47:40.063821   15929 system_pods.go:61] "kube-apiserver-addons-386094" [95082b3d-db11-4152-b4b5-52d089ba95d4] Running
	I1121 23:47:40.063827   15929 system_pods.go:61] "kube-controller-manager-addons-386094" [3d6f340e-d04e-4ed6-98e2-2be22b125ed8] Running
	I1121 23:47:40.063837   15929 system_pods.go:61] "kube-ingress-dns-minikube" [197d21e6-4a2b-4537-be0f-75276bf9a7bd] Pending
	I1121 23:47:40.063842   15929 system_pods.go:61] "kube-proxy-bqrb5" [57be0712-5da6-4247-b2be-62513e37470a] Running
	I1121 23:47:40.063849   15929 system_pods.go:61] "kube-scheduler-addons-386094" [d3d7633e-be01-4f85-aaa4-0bad5779b753] Running
	I1121 23:47:40.063854   15929 system_pods.go:61] "metrics-server-85b7d694d7-jj26h" [e0c05e28-f4b5-448b-959e-57c85afe18c7] Pending
	I1121 23:47:40.063863   15929 system_pods.go:61] "nvidia-device-plugin-daemonset-mqmzt" [230f0c0c-240c-4820-b379-e5dfea036b89] Pending
	I1121 23:47:40.063868   15929 system_pods.go:61] "registry-6b586f9694-sgqmn" [495ee6a1-7806-44cd-b4c4-c284b112d936] Pending
	I1121 23:47:40.063889   15929 system_pods.go:61] "registry-creds-764b6fb674-hvw4s" [d8248262-bb58-4830-86c5-7e3da3404d7a] Pending
	I1121 23:47:40.063894   15929 system_pods.go:61] "registry-proxy-7jwr9" [c7f10f4f-ba0a-466c-9874-8dad438139d6] Pending
	I1121 23:47:40.063898   15929 system_pods.go:61] "snapshot-controller-7d9fbc56b8-mfq9f" [ab513e9c-3ef9-4e55-b438-e10a14f09905] Pending
	I1121 23:47:40.063903   15929 system_pods.go:61] "snapshot-controller-7d9fbc56b8-wknk9" [ba2bf066-0739-46c2-9604-b532641c7f44] Pending
	I1121 23:47:40.063908   15929 system_pods.go:61] "storage-provisioner" [10f7dd48-8738-4d41-a8ca-3f45d17aec90] Pending
	I1121 23:47:40.063915   15929 system_pods.go:74] duration metric: took 4.821487ms to wait for pod list to return data ...
	I1121 23:47:40.063927   15929 default_sa.go:34] waiting for default service account to be created ...
	I1121 23:47:40.067093   15929 default_sa.go:45] found service account: "default"
	I1121 23:47:40.067114   15929 default_sa.go:55] duration metric: took 3.176618ms for default service account to be created ...
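
[Editor's note: the default_sa.go lines above wait for the "default" ServiceAccount, which the controller manager creates shortly after the namespace exists. A client-go sketch of that wait, with assumed interval and timeout:]

package waiter

import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitDefaultSA polls until the "default" ServiceAccount exists in the
// "default" namespace.
func waitDefaultSA(ctx context.Context, c kubernetes.Interface) error {
	return wait.PollUntilContextTimeout(ctx, time.Second, 2*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			_, err := c.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
			if apierrors.IsNotFound(err) {
				return false, nil // not created yet: keep polling
			}
			return err == nil, err
		})
}
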
	I1121 23:47:40.067123   15929 system_pods.go:116] waiting for k8s-apps to be running ...
	I1121 23:47:40.071166   15929 system_pods.go:86] 20 kube-system pods found
	I1121 23:47:40.071192   15929 system_pods.go:89] "amd-gpu-device-plugin-rjdxd" [9868ed15-c63b-4d0b-badf-c7e7b94197c6] Pending
	I1121 23:47:40.071204   15929 system_pods.go:89] "coredns-66bc5c9577-jdqrr" [b648a6f7-a412-4075-bcb6-7fde7db81c13] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 23:47:40.071210   15929 system_pods.go:89] "csi-hostpath-attacher-0" [1f374a63-58b4-4dba-89a4-0d60da730c94] Pending
	I1121 23:47:40.071217   15929 system_pods.go:89] "csi-hostpath-resizer-0" [60799ca4-c6e0-485b-870a-7fe05ffeed25] Pending
	I1121 23:47:40.071221   15929 system_pods.go:89] "csi-hostpathplugin-bw962" [19b38a39-2690-45c4-8297-fd5a89899c9b] Pending
	I1121 23:47:40.071225   15929 system_pods.go:89] "etcd-addons-386094" [6deab294-ddbf-4424-8887-3b099c8fde2a] Running
	I1121 23:47:40.071231   15929 system_pods.go:89] "kindnet-nhwtc" [ee44da65-a7eb-41f1-88ce-d96fa74d3e79] Running
	I1121 23:47:40.071243   15929 system_pods.go:89] "kube-apiserver-addons-386094" [95082b3d-db11-4152-b4b5-52d089ba95d4] Running
	I1121 23:47:40.071249   15929 system_pods.go:89] "kube-controller-manager-addons-386094" [3d6f340e-d04e-4ed6-98e2-2be22b125ed8] Running
	I1121 23:47:40.071258   15929 system_pods.go:89] "kube-ingress-dns-minikube" [197d21e6-4a2b-4537-be0f-75276bf9a7bd] Pending
	I1121 23:47:40.071262   15929 system_pods.go:89] "kube-proxy-bqrb5" [57be0712-5da6-4247-b2be-62513e37470a] Running
	I1121 23:47:40.071268   15929 system_pods.go:89] "kube-scheduler-addons-386094" [d3d7633e-be01-4f85-aaa4-0bad5779b753] Running
	I1121 23:47:40.071273   15929 system_pods.go:89] "metrics-server-85b7d694d7-jj26h" [e0c05e28-f4b5-448b-959e-57c85afe18c7] Pending
	I1121 23:47:40.071278   15929 system_pods.go:89] "nvidia-device-plugin-daemonset-mqmzt" [230f0c0c-240c-4820-b379-e5dfea036b89] Pending
	I1121 23:47:40.071283   15929 system_pods.go:89] "registry-6b586f9694-sgqmn" [495ee6a1-7806-44cd-b4c4-c284b112d936] Pending
	I1121 23:47:40.071291   15929 system_pods.go:89] "registry-creds-764b6fb674-hvw4s" [d8248262-bb58-4830-86c5-7e3da3404d7a] Pending
	I1121 23:47:40.071295   15929 system_pods.go:89] "registry-proxy-7jwr9" [c7f10f4f-ba0a-466c-9874-8dad438139d6] Pending
	I1121 23:47:40.071304   15929 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mfq9f" [ab513e9c-3ef9-4e55-b438-e10a14f09905] Pending
	I1121 23:47:40.071308   15929 system_pods.go:89] "snapshot-controller-7d9fbc56b8-wknk9" [ba2bf066-0739-46c2-9604-b532641c7f44] Pending
	I1121 23:47:40.071313   15929 system_pods.go:89] "storage-provisioner" [10f7dd48-8738-4d41-a8ca-3f45d17aec90] Pending
	I1121 23:47:40.071334   15929 retry.go:31] will retry after 269.09938ms: missing components: kube-dns
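
[Editor's note: the retry.go:31 lines in this phase show a growing, jittered delay (269ms, then 366ms, then 445ms) between re-checks for the missing kube-dns component. The sketch below shows the general retry-with-backoff shape behind those lines; the exact interval schedule comes from minikube's retry package, so the formula here is an assumption, not its implementation:]

package waiter

import (
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff re-runs fn until it succeeds or attempts run out,
// sleeping an exponentially growing, jittered interval between tries.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// double the base each round and add random jitter up to one base unit
		sleep := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %s: %v\n", sleep, err)
		time.Sleep(sleep)
	}
	return err
}
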
	I1121 23:47:40.259904   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:40.362843   15929 system_pods.go:86] 20 kube-system pods found
	I1121 23:47:40.362886   15929 system_pods.go:89] "amd-gpu-device-plugin-rjdxd" [9868ed15-c63b-4d0b-badf-c7e7b94197c6] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1121 23:47:40.362898   15929 system_pods.go:89] "coredns-66bc5c9577-jdqrr" [b648a6f7-a412-4075-bcb6-7fde7db81c13] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 23:47:40.362909   15929 system_pods.go:89] "csi-hostpath-attacher-0" [1f374a63-58b4-4dba-89a4-0d60da730c94] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1121 23:47:40.362917   15929 system_pods.go:89] "csi-hostpath-resizer-0" [60799ca4-c6e0-485b-870a-7fe05ffeed25] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1121 23:47:40.362925   15929 system_pods.go:89] "csi-hostpathplugin-bw962" [19b38a39-2690-45c4-8297-fd5a89899c9b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1121 23:47:40.362930   15929 system_pods.go:89] "etcd-addons-386094" [6deab294-ddbf-4424-8887-3b099c8fde2a] Running
	I1121 23:47:40.362938   15929 system_pods.go:89] "kindnet-nhwtc" [ee44da65-a7eb-41f1-88ce-d96fa74d3e79] Running
	I1121 23:47:40.362944   15929 system_pods.go:89] "kube-apiserver-addons-386094" [95082b3d-db11-4152-b4b5-52d089ba95d4] Running
	I1121 23:47:40.362951   15929 system_pods.go:89] "kube-controller-manager-addons-386094" [3d6f340e-d04e-4ed6-98e2-2be22b125ed8] Running
	I1121 23:47:40.362964   15929 system_pods.go:89] "kube-ingress-dns-minikube" [197d21e6-4a2b-4537-be0f-75276bf9a7bd] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1121 23:47:40.362970   15929 system_pods.go:89] "kube-proxy-bqrb5" [57be0712-5da6-4247-b2be-62513e37470a] Running
	I1121 23:47:40.362977   15929 system_pods.go:89] "kube-scheduler-addons-386094" [d3d7633e-be01-4f85-aaa4-0bad5779b753] Running
	I1121 23:47:40.362984   15929 system_pods.go:89] "metrics-server-85b7d694d7-jj26h" [e0c05e28-f4b5-448b-959e-57c85afe18c7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1121 23:47:40.362993   15929 system_pods.go:89] "nvidia-device-plugin-daemonset-mqmzt" [230f0c0c-240c-4820-b379-e5dfea036b89] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1121 23:47:40.363001   15929 system_pods.go:89] "registry-6b586f9694-sgqmn" [495ee6a1-7806-44cd-b4c4-c284b112d936] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1121 23:47:40.363011   15929 system_pods.go:89] "registry-creds-764b6fb674-hvw4s" [d8248262-bb58-4830-86c5-7e3da3404d7a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1121 23:47:40.363019   15929 system_pods.go:89] "registry-proxy-7jwr9" [c7f10f4f-ba0a-466c-9874-8dad438139d6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1121 23:47:40.363030   15929 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mfq9f" [ab513e9c-3ef9-4e55-b438-e10a14f09905] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1121 23:47:40.363039   15929 system_pods.go:89] "snapshot-controller-7d9fbc56b8-wknk9" [ba2bf066-0739-46c2-9604-b532641c7f44] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1121 23:47:40.363048   15929 system_pods.go:89] "storage-provisioner" [10f7dd48-8738-4d41-a8ca-3f45d17aec90] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 23:47:40.363080   15929 retry.go:31] will retry after 366.151557ms: missing components: kube-dns
	I1121 23:47:40.460820   15929 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1121 23:47:40.460843   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:40.462957   15929 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1121 23:47:40.462980   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:40.463594   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:40.733225   15929 system_pods.go:86] 20 kube-system pods found
	I1121 23:47:40.733254   15929 system_pods.go:89] "amd-gpu-device-plugin-rjdxd" [9868ed15-c63b-4d0b-badf-c7e7b94197c6] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1121 23:47:40.733261   15929 system_pods.go:89] "coredns-66bc5c9577-jdqrr" [b648a6f7-a412-4075-bcb6-7fde7db81c13] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 23:47:40.733269   15929 system_pods.go:89] "csi-hostpath-attacher-0" [1f374a63-58b4-4dba-89a4-0d60da730c94] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1121 23:47:40.733276   15929 system_pods.go:89] "csi-hostpath-resizer-0" [60799ca4-c6e0-485b-870a-7fe05ffeed25] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1121 23:47:40.733282   15929 system_pods.go:89] "csi-hostpathplugin-bw962" [19b38a39-2690-45c4-8297-fd5a89899c9b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1121 23:47:40.733285   15929 system_pods.go:89] "etcd-addons-386094" [6deab294-ddbf-4424-8887-3b099c8fde2a] Running
	I1121 23:47:40.733290   15929 system_pods.go:89] "kindnet-nhwtc" [ee44da65-a7eb-41f1-88ce-d96fa74d3e79] Running
	I1121 23:47:40.733293   15929 system_pods.go:89] "kube-apiserver-addons-386094" [95082b3d-db11-4152-b4b5-52d089ba95d4] Running
	I1121 23:47:40.733297   15929 system_pods.go:89] "kube-controller-manager-addons-386094" [3d6f340e-d04e-4ed6-98e2-2be22b125ed8] Running
	I1121 23:47:40.733303   15929 system_pods.go:89] "kube-ingress-dns-minikube" [197d21e6-4a2b-4537-be0f-75276bf9a7bd] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1121 23:47:40.733309   15929 system_pods.go:89] "kube-proxy-bqrb5" [57be0712-5da6-4247-b2be-62513e37470a] Running
	I1121 23:47:40.733312   15929 system_pods.go:89] "kube-scheduler-addons-386094" [d3d7633e-be01-4f85-aaa4-0bad5779b753] Running
	I1121 23:47:40.733317   15929 system_pods.go:89] "metrics-server-85b7d694d7-jj26h" [e0c05e28-f4b5-448b-959e-57c85afe18c7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1121 23:47:40.733323   15929 system_pods.go:89] "nvidia-device-plugin-daemonset-mqmzt" [230f0c0c-240c-4820-b379-e5dfea036b89] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1121 23:47:40.733328   15929 system_pods.go:89] "registry-6b586f9694-sgqmn" [495ee6a1-7806-44cd-b4c4-c284b112d936] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1121 23:47:40.733334   15929 system_pods.go:89] "registry-creds-764b6fb674-hvw4s" [d8248262-bb58-4830-86c5-7e3da3404d7a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1121 23:47:40.733339   15929 system_pods.go:89] "registry-proxy-7jwr9" [c7f10f4f-ba0a-466c-9874-8dad438139d6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1121 23:47:40.733346   15929 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mfq9f" [ab513e9c-3ef9-4e55-b438-e10a14f09905] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1121 23:47:40.733352   15929 system_pods.go:89] "snapshot-controller-7d9fbc56b8-wknk9" [ba2bf066-0739-46c2-9604-b532641c7f44] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1121 23:47:40.733358   15929 system_pods.go:89] "storage-provisioner" [10f7dd48-8738-4d41-a8ca-3f45d17aec90] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 23:47:40.733374   15929 retry.go:31] will retry after 445.528563ms: missing components: kube-dns
	I1121 23:47:40.759396   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:40.878544   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:40.960909   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:40.961001   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:41.183389   15929 system_pods.go:86] 20 kube-system pods found
	I1121 23:47:41.183424   15929 system_pods.go:89] "amd-gpu-device-plugin-rjdxd" [9868ed15-c63b-4d0b-badf-c7e7b94197c6] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1121 23:47:41.183431   15929 system_pods.go:89] "coredns-66bc5c9577-jdqrr" [b648a6f7-a412-4075-bcb6-7fde7db81c13] Running
	I1121 23:47:41.183439   15929 system_pods.go:89] "csi-hostpath-attacher-0" [1f374a63-58b4-4dba-89a4-0d60da730c94] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1121 23:47:41.183446   15929 system_pods.go:89] "csi-hostpath-resizer-0" [60799ca4-c6e0-485b-870a-7fe05ffeed25] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1121 23:47:41.183460   15929 system_pods.go:89] "csi-hostpathplugin-bw962" [19b38a39-2690-45c4-8297-fd5a89899c9b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1121 23:47:41.183471   15929 system_pods.go:89] "etcd-addons-386094" [6deab294-ddbf-4424-8887-3b099c8fde2a] Running
	I1121 23:47:41.183478   15929 system_pods.go:89] "kindnet-nhwtc" [ee44da65-a7eb-41f1-88ce-d96fa74d3e79] Running
	I1121 23:47:41.183485   15929 system_pods.go:89] "kube-apiserver-addons-386094" [95082b3d-db11-4152-b4b5-52d089ba95d4] Running
	I1121 23:47:41.183491   15929 system_pods.go:89] "kube-controller-manager-addons-386094" [3d6f340e-d04e-4ed6-98e2-2be22b125ed8] Running
	I1121 23:47:41.183501   15929 system_pods.go:89] "kube-ingress-dns-minikube" [197d21e6-4a2b-4537-be0f-75276bf9a7bd] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1121 23:47:41.183506   15929 system_pods.go:89] "kube-proxy-bqrb5" [57be0712-5da6-4247-b2be-62513e37470a] Running
	I1121 23:47:41.183512   15929 system_pods.go:89] "kube-scheduler-addons-386094" [d3d7633e-be01-4f85-aaa4-0bad5779b753] Running
	I1121 23:47:41.183527   15929 system_pods.go:89] "metrics-server-85b7d694d7-jj26h" [e0c05e28-f4b5-448b-959e-57c85afe18c7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1121 23:47:41.183534   15929 system_pods.go:89] "nvidia-device-plugin-daemonset-mqmzt" [230f0c0c-240c-4820-b379-e5dfea036b89] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1121 23:47:41.183540   15929 system_pods.go:89] "registry-6b586f9694-sgqmn" [495ee6a1-7806-44cd-b4c4-c284b112d936] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1121 23:47:41.183545   15929 system_pods.go:89] "registry-creds-764b6fb674-hvw4s" [d8248262-bb58-4830-86c5-7e3da3404d7a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1121 23:47:41.183552   15929 system_pods.go:89] "registry-proxy-7jwr9" [c7f10f4f-ba0a-466c-9874-8dad438139d6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1121 23:47:41.183564   15929 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mfq9f" [ab513e9c-3ef9-4e55-b438-e10a14f09905] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1121 23:47:41.183573   15929 system_pods.go:89] "snapshot-controller-7d9fbc56b8-wknk9" [ba2bf066-0739-46c2-9604-b532641c7f44] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1121 23:47:41.183579   15929 system_pods.go:89] "storage-provisioner" [10f7dd48-8738-4d41-a8ca-3f45d17aec90] Running
	I1121 23:47:41.183590   15929 system_pods.go:126] duration metric: took 1.116460563s to wait for k8s-apps to be running ...
	I1121 23:47:41.183603   15929 system_svc.go:44] waiting for kubelet service to be running ....
	I1121 23:47:41.183657   15929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 23:47:41.199631   15929 system_svc.go:56] duration metric: took 16.021076ms WaitForService to wait for kubelet
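
[Editor's note: the kubelet check above relies on "systemctl is-active --quiet", which exits 0 only when the unit is active, so no output parsing is needed. A one-function sketch of the same probe, using the standard single-unit form rather than the log's literal command string:]

package waiter

import "os/exec"

// kubeletActive reports whether the kubelet systemd unit is active.
func kubeletActive() bool {
	return exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run() == nil
}
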
	I1121 23:47:41.199663   15929 kubeadm.go:587] duration metric: took 42.755532163s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 23:47:41.199683   15929 node_conditions.go:102] verifying NodePressure condition ...
	I1121 23:47:41.202261   15929 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1121 23:47:41.202290   15929 node_conditions.go:123] node cpu capacity is 8
	I1121 23:47:41.202310   15929 node_conditions.go:105] duration metric: took 2.621029ms to run NodePressure ...
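
[Editor's note: the NodePressure step above reads ephemeral-storage and cpu from the node's capacity map, which is where the "304681132Ki" and "8" figures come from. A client-go sketch of that read; the function name is illustrative:]

package waiter

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printNodeCapacity reads the same fields the NodePressure check logs.
func printNodeCapacity(ctx context.Context, c kubernetes.Interface, name string) error {
	node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	// assign to locals so the pointer-receiver String method is callable
	eph := node.Status.Capacity[corev1.ResourceEphemeralStorage]
	cpu := node.Status.Capacity[corev1.ResourceCPU]
	fmt.Printf("node storage ephemeral capacity is %s\n", eph.String())
	fmt.Printf("node cpu capacity is %s\n", cpu.String())
	return nil
}
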
	I1121 23:47:41.202324   15929 start.go:242] waiting for startup goroutines ...
	I1121 23:47:41.260409   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:41.379174   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:41.461432   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:41.461523   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:41.759738   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:41.878840   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:41.961088   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:41.961373   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:42.262131   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:42.380880   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:42.462965   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:42.463968   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:42.761191   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:42.880321   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:42.960915   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:42.960972   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:43.260461   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:43.379673   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:43.461296   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:43.461346   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:43.760838   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:43.879916   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:43.980262   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:43.980272   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:44.261453   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:44.379418   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:44.460688   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:44.460782   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:44.760788   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:44.879730   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:44.961129   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:44.961293   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:45.261162   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:45.379618   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:45.461108   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:45.461227   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:45.761666   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:45.879816   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:45.961693   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:45.961862   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:46.261130   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:46.380839   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:46.461415   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:46.461493   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:46.760544   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:46.879624   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:46.961294   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:46.961338   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:47.261331   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:47.379504   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:47.461394   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:47.461468   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:47.760002   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:47.895537   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:47.996393   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:47.996483   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:48.261122   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:48.379446   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:48.461291   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:48.461360   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:48.761692   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:48.879636   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:48.961268   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:48.961328   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:49.261340   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:49.379825   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:49.461495   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:49.461550   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:49.759928   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:49.879350   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:49.960957   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:49.960947   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:50.260981   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:50.379928   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:50.461362   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:50.461417   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:50.760756   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:50.880290   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:50.960905   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:50.960927   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:51.260736   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:51.380399   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:51.461484   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:51.461512   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:51.760380   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:51.879338   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:51.979597   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:51.979805   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:52.260525   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:52.378832   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:52.460930   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:52.461137   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:52.761371   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:52.879462   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:52.961235   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:52.961379   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:53.260828   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:53.463866   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:53.463889   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:53.464003   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:53.760388   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:53.878817   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:53.961241   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:53.961421   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:54.260757   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:54.379665   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:54.461711   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:54.461751   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:54.760007   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:54.879937   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:54.963243   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:54.967283   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:55.260353   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:55.380685   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:55.480749   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:55.480822   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:55.759953   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:55.879711   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:55.960766   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:55.960854   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:56.260875   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:56.380022   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:56.461803   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:56.461911   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:56.761940   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:56.880019   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:56.960903   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:56.961044   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:57.260492   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:57.379172   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:57.461946   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:57.461984   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:57.760218   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:57.878900   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:57.961253   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:57.961362   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:58.261182   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:58.379418   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:58.460237   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:58.460266   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:58.761233   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:58.879585   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:58.961207   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:58.961273   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:59.261455   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:59.379087   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:59.461155   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:59.461295   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:59.760587   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:59.878947   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:59.961010   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:59.961065   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:00.260712   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:00.379571   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:00.461300   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:00.461541   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:00.761804   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:00.879663   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:00.961462   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:00.961514   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:01.260484   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:01.378971   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:01.461390   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:01.461422   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:01.759946   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:01.879454   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:01.961013   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:01.961027   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:02.262195   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:02.379369   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:02.460485   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:02.460491   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:02.760937   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:02.879973   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:02.961658   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:02.961706   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:03.259929   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:03.379323   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:03.460364   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:03.460578   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:03.759947   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:03.881756   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:03.960783   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:03.960943   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:04.260674   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:04.379310   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:04.461150   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:04.461239   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:04.761020   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:04.879752   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:04.961193   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:04.961196   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:05.260923   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:05.379949   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:05.461629   15929 kapi.go:107] duration metric: took 1m5.503626359s to wait for kubernetes.io/minikube-addons=registry ...
	I1121 23:48:05.461652   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:05.760188   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:05.879680   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:05.961012   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:06.261743   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:06.380093   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:06.462612   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:06.760430   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:06.879683   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:06.961362   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:07.261200   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:07.430755   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:07.460536   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:07.760244   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:07.879915   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:07.961542   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:08.259753   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:08.381230   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:08.460896   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:08.761159   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:08.879844   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:08.960596   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:09.260405   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:09.378956   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:09.479848   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:09.760544   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:09.881081   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:09.961825   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:10.260356   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:10.380867   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:10.460758   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:10.760462   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:10.879472   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:10.960952   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:11.260776   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:11.379538   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:11.460437   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:11.759731   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:11.878949   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:11.961196   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:12.260656   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:12.379133   15929 kapi.go:107] duration metric: took 1m12.00295056s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1121 23:48:12.462157   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:12.761820   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:12.961919   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:13.259977   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:13.461118   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:13.760764   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:13.965275   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:14.261134   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:14.461310   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:14.760846   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:14.961709   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:15.260930   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:15.461902   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:15.760528   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:15.961103   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:16.262280   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:16.460866   15929 kapi.go:107] duration metric: took 1m16.502863434s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1121 23:48:16.760813   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:17.261025   15929 kapi.go:107] duration metric: took 1m10.503503111s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1121 23:48:17.262207   15929 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-386094 cluster.
	I1121 23:48:17.263247   15929 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1121 23:48:17.264248   15929 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1121 23:48:17.265365   15929 out.go:179] * Enabled addons: inspektor-gadget, nvidia-device-plugin, registry-creds, cloud-spanner, amd-gpu-device-plugin, metrics-server, storage-provisioner, ingress-dns, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1121 23:48:17.266489   15929 addons.go:530] duration metric: took 1m18.822318032s for enable addons: enabled=[inspektor-gadget nvidia-device-plugin registry-creds cloud-spanner amd-gpu-device-plugin metrics-server storage-provisioner ingress-dns yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1121 23:48:17.266531   15929 start.go:247] waiting for cluster config update ...
	I1121 23:48:17.266560   15929 start.go:256] writing updated cluster config ...
	I1121 23:48:17.266789   15929 ssh_runner.go:195] Run: rm -f paused
	I1121 23:48:17.270366   15929 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 23:48:17.272959   15929 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-jdqrr" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:48:17.276478   15929 pod_ready.go:94] pod "coredns-66bc5c9577-jdqrr" is "Ready"
	I1121 23:48:17.276500   15929 pod_ready.go:86] duration metric: took 3.522986ms for pod "coredns-66bc5c9577-jdqrr" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:48:17.278128   15929 pod_ready.go:83] waiting for pod "etcd-addons-386094" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:48:17.281226   15929 pod_ready.go:94] pod "etcd-addons-386094" is "Ready"
	I1121 23:48:17.281244   15929 pod_ready.go:86] duration metric: took 3.096965ms for pod "etcd-addons-386094" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:48:17.282692   15929 pod_ready.go:83] waiting for pod "kube-apiserver-addons-386094" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:48:17.285808   15929 pod_ready.go:94] pod "kube-apiserver-addons-386094" is "Ready"
	I1121 23:48:17.285826   15929 pod_ready.go:86] duration metric: took 3.118387ms for pod "kube-apiserver-addons-386094" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:48:17.287219   15929 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-386094" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:48:17.673662   15929 pod_ready.go:94] pod "kube-controller-manager-addons-386094" is "Ready"
	I1121 23:48:17.673691   15929 pod_ready.go:86] duration metric: took 386.45492ms for pod "kube-controller-manager-addons-386094" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:48:17.873412   15929 pod_ready.go:83] waiting for pod "kube-proxy-bqrb5" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:48:18.301594   15929 pod_ready.go:94] pod "kube-proxy-bqrb5" is "Ready"
	I1121 23:48:18.301625   15929 pod_ready.go:86] duration metric: took 428.1844ms for pod "kube-proxy-bqrb5" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:48:18.562111   15929 pod_ready.go:83] waiting for pod "kube-scheduler-addons-386094" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:48:18.873927   15929 pod_ready.go:94] pod "kube-scheduler-addons-386094" is "Ready"
	I1121 23:48:18.873953   15929 pod_ready.go:86] duration metric: took 311.814985ms for pod "kube-scheduler-addons-386094" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:48:18.873967   15929 pod_ready.go:40] duration metric: took 1.603576966s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 23:48:18.916879   15929 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1121 23:48:18.918309   15929 out.go:179] * Done! kubectl is now configured to use "addons-386094" cluster and "default" namespace by default
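
A minimal sketch of the `gcp-auth-skip-secret` opt-out mentioned in the gcp-auth tip above. Only the label key comes from the minikube output; the pod name, the "true" value, and the container image are illustrative assumptions, not taken from this report:

    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-creds               # hypothetical name, for illustration only
      labels:
        gcp-auth-skip-secret: "true"   # key from the minikube tip; the value shown is an assumption
    spec:
      containers:
      - name: app
        image: docker.io/kicbase/echo-server:1.0   # an image this cluster already pulls elsewhere in the report

Per the tip above, a pod created from a manifest like this (e.g. with kubectl apply -f) should not have the GCP credentials mounted, while all other pods in the addons-386094 cluster would.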
	
	
	==> CRI-O <==
	Nov 21 23:51:04 addons-386094 crio[773]: time="2025-11-21T23:51:04.420410399Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-jjtzn/POD" id=500b2b84-b7db-4d06-adc9-a737016b946f name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 21 23:51:04 addons-386094 crio[773]: time="2025-11-21T23:51:04.420498616Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 23:51:04 addons-386094 crio[773]: time="2025-11-21T23:51:04.42789439Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-jjtzn Namespace:default ID:a7f87d147b96926148c39992f9afa8177592e447873f94f1b56c5b7fa13259ba UID:ef9513a7-e937-426c-a701-99c2e327b6e5 NetNS:/var/run/netns/58c9a079-0e8a-4db0-b555-a98f55710926 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0001335a0}] Aliases:map[]}"
	Nov 21 23:51:04 addons-386094 crio[773]: time="2025-11-21T23:51:04.42791949Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-jjtzn to CNI network \"kindnet\" (type=ptp)"
	Nov 21 23:51:04 addons-386094 crio[773]: time="2025-11-21T23:51:04.437644116Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-jjtzn Namespace:default ID:a7f87d147b96926148c39992f9afa8177592e447873f94f1b56c5b7fa13259ba UID:ef9513a7-e937-426c-a701-99c2e327b6e5 NetNS:/var/run/netns/58c9a079-0e8a-4db0-b555-a98f55710926 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0001335a0}] Aliases:map[]}"
	Nov 21 23:51:04 addons-386094 crio[773]: time="2025-11-21T23:51:04.437802938Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-jjtzn for CNI network kindnet (type=ptp)"
	Nov 21 23:51:04 addons-386094 crio[773]: time="2025-11-21T23:51:04.438893961Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 21 23:51:04 addons-386094 crio[773]: time="2025-11-21T23:51:04.440091399Z" level=info msg="Ran pod sandbox a7f87d147b96926148c39992f9afa8177592e447873f94f1b56c5b7fa13259ba with infra container: default/hello-world-app-5d498dc89-jjtzn/POD" id=500b2b84-b7db-4d06-adc9-a737016b946f name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 21 23:51:04 addons-386094 crio[773]: time="2025-11-21T23:51:04.441287897Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=9206b0cd-826d-48f0-95f5-ea171916783a name=/runtime.v1.ImageService/ImageStatus
	Nov 21 23:51:04 addons-386094 crio[773]: time="2025-11-21T23:51:04.441423537Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=9206b0cd-826d-48f0-95f5-ea171916783a name=/runtime.v1.ImageService/ImageStatus
	Nov 21 23:51:04 addons-386094 crio[773]: time="2025-11-21T23:51:04.441479674Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=9206b0cd-826d-48f0-95f5-ea171916783a name=/runtime.v1.ImageService/ImageStatus
	Nov 21 23:51:04 addons-386094 crio[773]: time="2025-11-21T23:51:04.442016508Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=b49891ac-3ff0-41b8-b89f-eaffc4634909 name=/runtime.v1.ImageService/PullImage
	Nov 21 23:51:04 addons-386094 crio[773]: time="2025-11-21T23:51:04.449367138Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Nov 21 23:51:05 addons-386094 crio[773]: time="2025-11-21T23:51:05.23229619Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86" id=b49891ac-3ff0-41b8-b89f-eaffc4634909 name=/runtime.v1.ImageService/PullImage
	Nov 21 23:51:05 addons-386094 crio[773]: time="2025-11-21T23:51:05.23278744Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=3429280b-7ebc-40c2-8b97-38fc6c899323 name=/runtime.v1.ImageService/ImageStatus
	Nov 21 23:51:05 addons-386094 crio[773]: time="2025-11-21T23:51:05.234304926Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=0cf3b1a7-4939-4db6-a714-9dd7e15ef59a name=/runtime.v1.ImageService/ImageStatus
	Nov 21 23:51:05 addons-386094 crio[773]: time="2025-11-21T23:51:05.238464918Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-jjtzn/hello-world-app" id=79b6fbcb-56ac-4fef-b8fb-796e1a4f159d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 23:51:05 addons-386094 crio[773]: time="2025-11-21T23:51:05.238583878Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 23:51:05 addons-386094 crio[773]: time="2025-11-21T23:51:05.244164529Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 23:51:05 addons-386094 crio[773]: time="2025-11-21T23:51:05.244361938Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/62c2aa4192ab0f1434ff147761d13519a829a5684fa05247e2d3252403042b9a/merged/etc/passwd: no such file or directory"
	Nov 21 23:51:05 addons-386094 crio[773]: time="2025-11-21T23:51:05.244391748Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/62c2aa4192ab0f1434ff147761d13519a829a5684fa05247e2d3252403042b9a/merged/etc/group: no such file or directory"
	Nov 21 23:51:05 addons-386094 crio[773]: time="2025-11-21T23:51:05.244635569Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 23:51:05 addons-386094 crio[773]: time="2025-11-21T23:51:05.277252786Z" level=info msg="Created container c9381a3414f4ebcb9266659b4c4e32345667ad517d6b5dbc3c7d0739aebc5c8a: default/hello-world-app-5d498dc89-jjtzn/hello-world-app" id=79b6fbcb-56ac-4fef-b8fb-796e1a4f159d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 23:51:05 addons-386094 crio[773]: time="2025-11-21T23:51:05.277760062Z" level=info msg="Starting container: c9381a3414f4ebcb9266659b4c4e32345667ad517d6b5dbc3c7d0739aebc5c8a" id=42a7948b-eb39-490c-8b6e-bef258778b81 name=/runtime.v1.RuntimeService/StartContainer
	Nov 21 23:51:05 addons-386094 crio[773]: time="2025-11-21T23:51:05.279433857Z" level=info msg="Started container" PID=9799 containerID=c9381a3414f4ebcb9266659b4c4e32345667ad517d6b5dbc3c7d0739aebc5c8a description=default/hello-world-app-5d498dc89-jjtzn/hello-world-app id=42a7948b-eb39-490c-8b6e-bef258778b81 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a7f87d147b96926148c39992f9afa8177592e447873f94f1b56c5b7fa13259ba
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	c9381a3414f4e       docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86                                        Less than a second ago   Running             hello-world-app                          0                   a7f87d147b969       hello-world-app-5d498dc89-jjtzn            default
	dff6365d0be30       docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605                             About a minute ago       Running             registry-creds                           0                   2d5bdb85bf170       registry-creds-764b6fb674-hvw4s            kube-system
	a5a60f6dc612b       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                                              2 minutes ago            Running             nginx                                    0                   5ea25a024e6ab       nginx                                      default
	ad7d4826038c1       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          2 minutes ago            Running             busybox                                  0                   aa66f53b81b35       busybox                                    default
	f0902a7fbcf03       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 2 minutes ago            Running             gcp-auth                                 0                   33d9f6d328500       gcp-auth-78565c9fb4-rld7n                  gcp-auth
	4c40091f92b42       registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27                             2 minutes ago            Running             controller                               0                   b9426c32aee24       ingress-nginx-controller-6c8bf45fb-bm7tc   ingress-nginx
	075e2ddbfd1b3       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          2 minutes ago            Running             csi-snapshotter                          0                   9c396f7a9b3f2       csi-hostpathplugin-bw962                   kube-system
	7dcc52ab64881       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          2 minutes ago            Running             csi-provisioner                          0                   9c396f7a9b3f2       csi-hostpathplugin-bw962                   kube-system
	28017a975316b       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            2 minutes ago            Running             liveness-probe                           0                   9c396f7a9b3f2       csi-hostpathplugin-bw962                   kube-system
	af7d23a1702df       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           2 minutes ago            Running             hostpath                                 0                   9c396f7a9b3f2       csi-hostpathplugin-bw962                   kube-system
	c18a6d90e25b7       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:9a12b3c1d155bb081ff408a9b6c1cec18573c967e0c3917225b81ffe11c0b7f2                            2 minutes ago            Running             gadget                                   0                   635a014a6c4e5       gadget-pjh9l                               gadget
	eeac962e2c5b5       884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45                                                                             2 minutes ago            Exited              patch                                    2                   32542f52a6f17       ingress-nginx-admission-patch-tztpx        ingress-nginx
	b0ebc6a2643ce       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                3 minutes ago            Running             node-driver-registrar                    0                   9c396f7a9b3f2       csi-hostpathplugin-bw962                   kube-system
	08636839ef014       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              3 minutes ago            Running             registry-proxy                           0                   6de26d98e1adc       registry-proxy-7jwr9                       kube-system
	daba4d9b267e3       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   3 minutes ago            Exited              create                                   0                   2140b22f1628f       ingress-nginx-admission-create-8z425       ingress-nginx
	2cad693d643e2       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     3 minutes ago            Running             amd-gpu-device-plugin                    0                   1eb615ffd7084       amd-gpu-device-plugin-rjdxd                kube-system
	f1ffe717c9acc       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     3 minutes ago            Running             nvidia-device-plugin-ctr                 0                   623eb21922668       nvidia-device-plugin-daemonset-mqmzt       kube-system
	2d8c2a76b689b       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   3 minutes ago            Running             csi-external-health-monitor-controller   0                   9c396f7a9b3f2       csi-hostpathplugin-bw962                   kube-system
	fdcdf133e27bc       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago            Running             volume-snapshot-controller               0                   d82e94d7a353d       snapshot-controller-7d9fbc56b8-wknk9       kube-system
	1d39e28b86df0       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              3 minutes ago            Running             yakd                                     0                   b7140e30c78e7       yakd-dashboard-5ff678cb9-qqz4m             yakd-dashboard
	dafa66d52ee1e       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             3 minutes ago            Running             csi-attacher                             0                   7ee01580ffe38       csi-hostpath-attacher-0                    kube-system
	4188e634536cb       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              3 minutes ago            Running             csi-resizer                              0                   942f98da9ed8b       csi-hostpath-resizer-0                     kube-system
	9b71739f67786       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago            Running             volume-snapshot-controller               0                   fe6f9c5d7083c       snapshot-controller-7d9fbc56b8-mfq9f       kube-system
	d5d12cca9e0c9       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           3 minutes ago            Running             registry                                 0                   11e3e9ce8d611       registry-6b586f9694-sgqmn                  kube-system
	d0c5a0bacbbac       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               3 minutes ago            Running             minikube-ingress-dns                     0                   8a1f09fe5a9a8       kube-ingress-dns-minikube                  kube-system
	9ad0bc2610d4a       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        3 minutes ago            Running             metrics-server                           0                   91edf4cb58d2b       metrics-server-85b7d694d7-jj26h            kube-system
	3c127b7d8aab5       gcr.io/cloud-spanner-emulator/emulator@sha256:7360f5c5ff4b89d75592d9585fc2d59d207b08ccf262a84edfe79ee0613a7099                               3 minutes ago            Running             cloud-spanner-emulator                   0                   961289b1b8601       cloud-spanner-emulator-6f9fcf858b-wrw5n    default
	e41b861c11004       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             3 minutes ago            Running             local-path-provisioner                   0                   76e2acb10c471       local-path-provisioner-648f6765c9-l5l4d    local-path-storage
	a72f974d29159       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             3 minutes ago            Running             storage-provisioner                      0                   4cf33b190a059       storage-provisioner                        kube-system
	e22fb0f003be5       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             3 minutes ago            Running             coredns                                  0                   9175c2043e99b       coredns-66bc5c9577-jdqrr                   kube-system
	e2527b33e3e0a       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             4 minutes ago            Running             kindnet-cni                              0                   6501523b079a1       kindnet-nhwtc                              kube-system
	8f7137d6b0740       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             4 minutes ago            Running             kube-proxy                               0                   92f5a912e6393       kube-proxy-bqrb5                           kube-system
	343a757d13fed       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             4 minutes ago            Running             kube-scheduler                           0                   ae4757b3cb885       kube-scheduler-addons-386094               kube-system
	07338917ef004       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             4 minutes ago            Running             kube-controller-manager                  0                   a31bf4d5d7491       kube-controller-manager-addons-386094      kube-system
	b69d049422e96       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             4 minutes ago            Running             kube-apiserver                           0                   f3bbc4a677472       kube-apiserver-addons-386094               kube-system
	d75ed4c5a71d1       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             4 minutes ago            Running             etcd                                     0                   bdc86dbfd85b0       etcd-addons-386094                         kube-system
	
	
	==> coredns [e22fb0f003be5df198b862027391260b213589b89bd11cbaf227403d428038a9] <==
	[INFO] 10.244.0.22:33709 - 31346 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.004984531s
	[INFO] 10.244.0.22:59721 - 23945 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005594945s
	[INFO] 10.244.0.22:51178 - 21357 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.00583659s
	[INFO] 10.244.0.22:39969 - 24978 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005104809s
	[INFO] 10.244.0.22:60414 - 32222 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005827926s
	[INFO] 10.244.0.22:35380 - 34860 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.000881904s
	[INFO] 10.244.0.22:45589 - 15145 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001007106s
	[INFO] 10.244.0.25:53184 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000212325s
	[INFO] 10.244.0.25:51429 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000174015s
	[INFO] 10.244.0.31:41482 - 25789 "A IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000169697s
	[INFO] 10.244.0.31:43137 - 54055 "AAAA IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000186675s
	[INFO] 10.244.0.31:40349 - 5654 "A IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000097826s
	[INFO] 10.244.0.31:34186 - 45957 "AAAA IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000133991s
	[INFO] 10.244.0.31:39715 - 16974 "AAAA IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000100589s
	[INFO] 10.244.0.31:41251 - 28909 "A IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000157949s
	[INFO] 10.244.0.31:46419 - 51666 "AAAA IN accounts.google.com.local. udp 43 false 512" NXDOMAIN qr,rd,ra 43 0.002702697s
	[INFO] 10.244.0.31:59835 - 16322 "A IN accounts.google.com.local. udp 43 false 512" NXDOMAIN qr,rd,ra 43 0.002946317s
	[INFO] 10.244.0.31:49047 - 25721 "AAAA IN accounts.google.com.us-central1-a.c.k8s-minikube.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 185 0.00372026s
	[INFO] 10.244.0.31:54085 - 39517 "A IN accounts.google.com.us-central1-a.c.k8s-minikube.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 185 0.004969038s
	[INFO] 10.244.0.31:56315 - 3551 "A IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.004314855s
	[INFO] 10.244.0.31:33359 - 8723 "AAAA IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.004565285s
	[INFO] 10.244.0.31:56932 - 42352 "A IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.003604006s
	[INFO] 10.244.0.31:55094 - 33141 "AAAA IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.004402869s
	[INFO] 10.244.0.31:33648 - 26284 "A IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 72 0.001481687s
	[INFO] 10.244.0.31:44324 - 31926 "AAAA IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 84 0.002855706s
	
	
	==> describe nodes <==
	Name:               addons-386094
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-386094
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785
	                    minikube.k8s.io/name=addons-386094
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_21T23_46_53_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-386094
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-386094"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Nov 2025 23:46:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-386094
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Nov 2025 23:50:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Nov 2025 23:50:27 +0000   Fri, 21 Nov 2025 23:46:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Nov 2025 23:50:27 +0000   Fri, 21 Nov 2025 23:46:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Nov 2025 23:50:27 +0000   Fri, 21 Nov 2025 23:46:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Nov 2025 23:50:27 +0000   Fri, 21 Nov 2025 23:47:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-386094
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 5665009e93b91d39dc05718b691e3875
	  System UUID:                3e3b5b98-949e-4931-ada6-ea20a7cfd370
	  Boot ID:                    8e6acb0e-95c3-406d-a4d5-d86e16610b0f
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (29 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m46s
	  default                     cloud-spanner-emulator-6f9fcf858b-wrw5n     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	  default                     hello-world-app-5d498dc89-jjtzn             0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  gadget                      gadget-pjh9l                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	  gcp-auth                    gcp-auth-78565c9fb4-rld7n                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m59s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-bm7tc    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         4m6s
	  kube-system                 amd-gpu-device-plugin-rjdxd                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m25s
	  kube-system                 coredns-66bc5c9577-jdqrr                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m7s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 csi-hostpathplugin-bw962                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m25s
	  kube-system                 etcd-addons-386094                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m13s
	  kube-system                 kindnet-nhwtc                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m8s
	  kube-system                 kube-apiserver-addons-386094                250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 kube-controller-manager-addons-386094       200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 kube-proxy-bqrb5                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-scheduler-addons-386094                100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 metrics-server-85b7d694d7-jj26h             100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         4m6s
	  kube-system                 nvidia-device-plugin-daemonset-mqmzt        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m25s
	  kube-system                 registry-6b586f9694-sgqmn                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 registry-creds-764b6fb674-hvw4s             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 registry-proxy-7jwr9                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m25s
	  kube-system                 snapshot-controller-7d9fbc56b8-mfq9f        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 snapshot-controller-7d9fbc56b8-wknk9        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	  local-path-storage          local-path-provisioner-648f6765c9-l5l4d     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-qqz4m              0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     4m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m5s   kube-proxy       
	  Normal  Starting                 4m13s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m13s  kubelet          Node addons-386094 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m13s  kubelet          Node addons-386094 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m13s  kubelet          Node addons-386094 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m8s   node-controller  Node addons-386094 event: Registered Node addons-386094 in Controller
	  Normal  NodeReady                3m26s  kubelet          Node addons-386094 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.082960] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023673] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.276450] kauditd_printk_skb: 47 callbacks suppressed
	[Nov21 23:48] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.062274] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.023896] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.023887] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.023879] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.023873] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +2.047773] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +4.031577] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[Nov21 23:49] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[ +16.383304] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[ +32.252695] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	
	
	==> etcd [d75ed4c5a71d1046bfb84c05ee8ef611de253000d3738fddfd1f3e871449f077] <==
	{"level":"warn","ts":"2025-11-21T23:46:49.866663Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:46:49.875187Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:46:49.881645Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:46:49.888232Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:46:49.894826Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:46:49.901192Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:46:49.907160Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:46:49.912969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:46:49.920591Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:46:49.927168Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:46:49.934017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:46:49.939907Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:46:49.945570Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:46:49.969792Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:46:49.975946Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:46:49.981492Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:46:50.028788Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:47:00.847589Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:47:27.480738Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:47:27.490890Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:47:27.497370Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:48:18.560329Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"146.335989ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-21T23:48:18.560429Z","caller":"traceutil/trace.go:172","msg":"trace[77590416] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1226; }","duration":"146.4537ms","start":"2025-11-21T23:48:18.413962Z","end":"2025-11-21T23:48:18.560416Z","steps":["trace[77590416] 'range keys from in-memory index tree'  (duration: 146.304225ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-21T23:48:18.560769Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"130.713842ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128041477335362828 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/gcp-auth/gcp-auth-certs-patch.187a2a77dded6000\" mod_revision:0 > success:<request_put:<key:\"/registry/events/gcp-auth/gcp-auth-certs-patch.187a2a77dded6000\" value_size:570 lease:8128041477335362116 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-11-21T23:48:18.560874Z","caller":"traceutil/trace.go:172","msg":"trace[126546247] transaction","detail":"{read_only:false; response_revision:1227; number_of_response:1; }","duration":"177.886979ms","start":"2025-11-21T23:48:18.382966Z","end":"2025-11-21T23:48:18.560853Z","steps":["trace[126546247] 'process raft request'  (duration: 46.708129ms)","trace[126546247] 'compare'  (duration: 130.624811ms)"],"step_count":2}
	
	
	==> gcp-auth [f0902a7fbcf03ea8319a6579ea7fd6ded469a0b438c1636668a12bfa50e182bc] <==
	2025/11/21 23:48:16 GCP Auth Webhook started!
	2025/11/21 23:48:19 Ready to marshal response ...
	2025/11/21 23:48:19 Ready to write response ...
	2025/11/21 23:48:19 Ready to marshal response ...
	2025/11/21 23:48:19 Ready to write response ...
	2025/11/21 23:48:19 Ready to marshal response ...
	2025/11/21 23:48:19 Ready to write response ...
	2025/11/21 23:48:35 Ready to marshal response ...
	2025/11/21 23:48:35 Ready to write response ...
	2025/11/21 23:48:35 Ready to marshal response ...
	2025/11/21 23:48:35 Ready to write response ...
	2025/11/21 23:48:37 Ready to marshal response ...
	2025/11/21 23:48:37 Ready to write response ...
	2025/11/21 23:48:38 Ready to marshal response ...
	2025/11/21 23:48:38 Ready to write response ...
	2025/11/21 23:48:44 Ready to marshal response ...
	2025/11/21 23:48:44 Ready to write response ...
	2025/11/21 23:48:47 Ready to marshal response ...
	2025/11/21 23:48:47 Ready to write response ...
	2025/11/21 23:49:18 Ready to marshal response ...
	2025/11/21 23:49:18 Ready to write response ...
	2025/11/21 23:51:04 Ready to marshal response ...
	2025/11/21 23:51:04 Ready to write response ...
	
	
	==> kernel <==
	 23:51:05 up 33 min,  0 user,  load average: 0.29, 0.55, 0.29
	Linux addons-386094 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e2527b33e3e0a37254f4ce209f68208b0d6bb19a8e9d5f4c04577a95745c00e5] <==
	I1121 23:48:59.791286       1 main.go:301] handling current node
	I1121 23:49:09.791965       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 23:49:09.791995       1 main.go:301] handling current node
	I1121 23:49:19.791847       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 23:49:19.791875       1 main.go:301] handling current node
	I1121 23:49:29.791482       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 23:49:29.791513       1 main.go:301] handling current node
	I1121 23:49:39.798664       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 23:49:39.798697       1 main.go:301] handling current node
	I1121 23:49:49.791979       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 23:49:49.792005       1 main.go:301] handling current node
	I1121 23:49:59.792109       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 23:49:59.792134       1 main.go:301] handling current node
	I1121 23:50:09.799346       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 23:50:09.799378       1 main.go:301] handling current node
	I1121 23:50:19.800456       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 23:50:19.800482       1 main.go:301] handling current node
	I1121 23:50:29.792266       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 23:50:29.792305       1 main.go:301] handling current node
	I1121 23:50:39.793823       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 23:50:39.793875       1 main.go:301] handling current node
	I1121 23:50:49.793348       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 23:50:49.793374       1 main.go:301] handling current node
	I1121 23:50:59.797955       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 23:50:59.797986       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b69d049422e96952e03c6f61a20b8e9158e45490b685ec179d328f0af957256c] <==
	 > logger="UnhandledError"
	E1121 23:47:46.984163       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.72.114:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.72.114:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.72.114:443: connect: connection refused" logger="UnhandledError"
	E1121 23:47:46.986528       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.72.114:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.72.114:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.72.114:443: connect: connection refused" logger="UnhandledError"
	W1121 23:47:47.984313       1 handler_proxy.go:99] no RequestInfo found in the context
	E1121 23:47:47.984387       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1121 23:47:47.984400       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1121 23:47:47.984325       1 handler_proxy.go:99] no RequestInfo found in the context
	E1121 23:47:47.984446       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1121 23:47:47.985602       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1121 23:47:50.486574       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1121 23:47:51.995711       1 handler_proxy.go:99] no RequestInfo found in the context
	E1121 23:47:51.995769       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1121 23:47:51.995796       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.72.114:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.72.114:443/apis/metrics.k8s.io/v1beta1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	E1121 23:48:26.557959       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:54412: use of closed network connection
	E1121 23:48:26.693629       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:54448: use of closed network connection
	I1121 23:48:38.714860       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1121 23:48:38.893793       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.100.129.30"}
	I1121 23:48:56.753077       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1121 23:51:04.184507       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.97.30.179"}
	
	
	==> kube-controller-manager [07338917ef0048828dfbbd1553f9a031c7bd1f4699ffe7fcee081714d70aa687] <==
	I1121 23:46:57.453656       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1121 23:46:57.453676       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1121 23:46:57.453658       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1121 23:46:57.454466       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1121 23:46:57.455999       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1121 23:46:57.456092       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1121 23:46:57.457246       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1121 23:46:57.462086       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1121 23:46:57.464922       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 23:46:57.474645       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1121 23:46:57.475743       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1121 23:46:57.475789       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1121 23:46:57.475819       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1121 23:46:57.475830       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1121 23:46:57.475836       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1121 23:46:57.481078       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-386094" podCIDRs=["10.244.0.0/24"]
	E1121 23:46:59.596960       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1121 23:47:27.468560       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1121 23:47:27.468698       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1121 23:47:27.468728       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1121 23:47:27.480895       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1121 23:47:27.484156       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1121 23:47:27.569001       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 23:47:27.585279       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1121 23:47:42.461231       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [8f7137d6b0740cb066ea7ebf0361dc186434b4e47bdb1c5dc607dabb5bfc4a73] <==
	I1121 23:46:59.545431       1 server_linux.go:53] "Using iptables proxy"
	I1121 23:46:59.696530       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1121 23:46:59.796677       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1121 23:46:59.796708       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1121 23:46:59.796790       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1121 23:46:59.822032       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1121 23:46:59.822172       1 server_linux.go:132] "Using iptables Proxier"
	I1121 23:46:59.831607       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1121 23:46:59.838565       1 server.go:527] "Version info" version="v1.34.1"
	I1121 23:46:59.838600       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 23:46:59.840213       1 config.go:403] "Starting serviceCIDR config controller"
	I1121 23:46:59.840237       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1121 23:46:59.840297       1 config.go:309] "Starting node config controller"
	I1121 23:46:59.840315       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1121 23:46:59.840325       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1121 23:46:59.840341       1 config.go:200] "Starting service config controller"
	I1121 23:46:59.840355       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1121 23:46:59.840372       1 config.go:106] "Starting endpoint slice config controller"
	I1121 23:46:59.840384       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1121 23:46:59.940519       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1121 23:46:59.940795       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1121 23:46:59.940813       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [343a757d13fed0f3af37f66cd8776c9b1592f0157fd199fc07c446759a8c79dc] <==
	E1121 23:46:50.441587       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1121 23:46:50.441584       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1121 23:46:50.441706       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1121 23:46:50.441752       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1121 23:46:50.441852       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1121 23:46:50.441948       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1121 23:46:50.442011       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1121 23:46:50.442033       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1121 23:46:50.442258       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1121 23:46:50.442510       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1121 23:46:50.442555       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1121 23:46:50.442594       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1121 23:46:50.442968       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1121 23:46:50.442999       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1121 23:46:50.443633       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1121 23:46:50.443674       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1121 23:46:50.444161       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1121 23:46:51.288023       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1121 23:46:51.388810       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1121 23:46:51.417124       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1121 23:46:51.478639       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1121 23:46:51.569947       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1121 23:46:51.587727       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1121 23:46:51.602651       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	I1121 23:46:53.038590       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 21 23:49:26 addons-386094 kubelet[1275]: I1121 23:49:26.022605    1275 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"task-pv-storage\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^b1e92b57-c734-11f0-bf31-ce3033c3d9c9\") pod \"0f9e8b68-5a5a-4fd0-b893-dc7cdbb7222e\" (UID: \"0f9e8b68-5a5a-4fd0-b893-dc7cdbb7222e\") "
	Nov 21 23:49:26 addons-386094 kubelet[1275]: I1121 23:49:26.022642    1275 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/0f9e8b68-5a5a-4fd0-b893-dc7cdbb7222e-gcp-creds\") pod \"0f9e8b68-5a5a-4fd0-b893-dc7cdbb7222e\" (UID: \"0f9e8b68-5a5a-4fd0-b893-dc7cdbb7222e\") "
	Nov 21 23:49:26 addons-386094 kubelet[1275]: I1121 23:49:26.022669    1275 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgtlb\" (UniqueName: \"kubernetes.io/projected/0f9e8b68-5a5a-4fd0-b893-dc7cdbb7222e-kube-api-access-hgtlb\") pod \"0f9e8b68-5a5a-4fd0-b893-dc7cdbb7222e\" (UID: \"0f9e8b68-5a5a-4fd0-b893-dc7cdbb7222e\") "
	Nov 21 23:49:26 addons-386094 kubelet[1275]: I1121 23:49:26.022731    1275 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0f9e8b68-5a5a-4fd0-b893-dc7cdbb7222e-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "0f9e8b68-5a5a-4fd0-b893-dc7cdbb7222e" (UID: "0f9e8b68-5a5a-4fd0-b893-dc7cdbb7222e"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Nov 21 23:49:26 addons-386094 kubelet[1275]: I1121 23:49:26.022815    1275 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/0f9e8b68-5a5a-4fd0-b893-dc7cdbb7222e-gcp-creds\") on node \"addons-386094\" DevicePath \"\""
	Nov 21 23:49:26 addons-386094 kubelet[1275]: I1121 23:49:26.024815    1275 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f9e8b68-5a5a-4fd0-b893-dc7cdbb7222e-kube-api-access-hgtlb" (OuterVolumeSpecName: "kube-api-access-hgtlb") pod "0f9e8b68-5a5a-4fd0-b893-dc7cdbb7222e" (UID: "0f9e8b68-5a5a-4fd0-b893-dc7cdbb7222e"). InnerVolumeSpecName "kube-api-access-hgtlb". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 21 23:49:26 addons-386094 kubelet[1275]: I1121 23:49:26.025457    1275 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^b1e92b57-c734-11f0-bf31-ce3033c3d9c9" (OuterVolumeSpecName: "task-pv-storage") pod "0f9e8b68-5a5a-4fd0-b893-dc7cdbb7222e" (UID: "0f9e8b68-5a5a-4fd0-b893-dc7cdbb7222e"). InnerVolumeSpecName "pvc-36572ebb-16ec-4576-855b-cc45fdfb8b5f". PluginName "kubernetes.io/csi", VolumeGIDValue ""
	Nov 21 23:49:26 addons-386094 kubelet[1275]: I1121 23:49:26.123873    1275 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hgtlb\" (UniqueName: \"kubernetes.io/projected/0f9e8b68-5a5a-4fd0-b893-dc7cdbb7222e-kube-api-access-hgtlb\") on node \"addons-386094\" DevicePath \"\""
	Nov 21 23:49:26 addons-386094 kubelet[1275]: I1121 23:49:26.123915    1275 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-36572ebb-16ec-4576-855b-cc45fdfb8b5f\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^b1e92b57-c734-11f0-bf31-ce3033c3d9c9\") on node \"addons-386094\" "
	Nov 21 23:49:26 addons-386094 kubelet[1275]: I1121 23:49:26.127736    1275 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-36572ebb-16ec-4576-855b-cc45fdfb8b5f" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^b1e92b57-c734-11f0-bf31-ce3033c3d9c9") on node "addons-386094"
	Nov 21 23:49:26 addons-386094 kubelet[1275]: I1121 23:49:26.224176    1275 reconciler_common.go:299] "Volume detached for volume \"pvc-36572ebb-16ec-4576-855b-cc45fdfb8b5f\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^b1e92b57-c734-11f0-bf31-ce3033c3d9c9\") on node \"addons-386094\" DevicePath \"\""
	Nov 21 23:49:26 addons-386094 kubelet[1275]: I1121 23:49:26.410950    1275 scope.go:117] "RemoveContainer" containerID="3bf116d674f39e539874b75ae70997af41b1f325c517a433cb139732244d894a"
	Nov 21 23:49:26 addons-386094 kubelet[1275]: I1121 23:49:26.421111    1275 scope.go:117] "RemoveContainer" containerID="3bf116d674f39e539874b75ae70997af41b1f325c517a433cb139732244d894a"
	Nov 21 23:49:26 addons-386094 kubelet[1275]: E1121 23:49:26.421604    1275 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bf116d674f39e539874b75ae70997af41b1f325c517a433cb139732244d894a\": container with ID starting with 3bf116d674f39e539874b75ae70997af41b1f325c517a433cb139732244d894a not found: ID does not exist" containerID="3bf116d674f39e539874b75ae70997af41b1f325c517a433cb139732244d894a"
	Nov 21 23:49:26 addons-386094 kubelet[1275]: I1121 23:49:26.421732    1275 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bf116d674f39e539874b75ae70997af41b1f325c517a433cb139732244d894a"} err="failed to get container status \"3bf116d674f39e539874b75ae70997af41b1f325c517a433cb139732244d894a\": rpc error: code = NotFound desc = could not find container \"3bf116d674f39e539874b75ae70997af41b1f325c517a433cb139732244d894a\": container with ID starting with 3bf116d674f39e539874b75ae70997af41b1f325c517a433cb139732244d894a not found: ID does not exist"
	Nov 21 23:49:26 addons-386094 kubelet[1275]: I1121 23:49:26.832033    1275 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f9e8b68-5a5a-4fd0-b893-dc7cdbb7222e" path="/var/lib/kubelet/pods/0f9e8b68-5a5a-4fd0-b893-dc7cdbb7222e/volumes"
	Nov 21 23:49:30 addons-386094 kubelet[1275]: I1121 23:49:30.829603    1275 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-rjdxd" secret="" err="secret \"gcp-auth\" not found"
	Nov 21 23:49:43 addons-386094 kubelet[1275]: E1121 23:49:43.021227    1275 pod_workers.go:1324] "Error syncing pod, skipping" err="unmounted volumes=[gcr-creds], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="kube-system/registry-creds-764b6fb674-hvw4s" podUID="d8248262-bb58-4830-86c5-7e3da3404d7a"
	Nov 21 23:49:57 addons-386094 kubelet[1275]: I1121 23:49:57.530049    1275 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-creds-764b6fb674-hvw4s" podStartSLOduration=177.328435295 podStartE2EDuration="2m58.530032204s" podCreationTimestamp="2025-11-21 23:46:59 +0000 UTC" firstStartedPulling="2025-11-21 23:49:55.84969988 +0000 UTC m=+183.096266137" lastFinishedPulling="2025-11-21 23:49:57.05129677 +0000 UTC m=+184.297863046" observedRunningTime="2025-11-21 23:49:57.529309286 +0000 UTC m=+184.775875545" watchObservedRunningTime="2025-11-21 23:49:57.530032204 +0000 UTC m=+184.776598463"
	Nov 21 23:50:14 addons-386094 kubelet[1275]: I1121 23:50:14.829278    1275 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-mqmzt" secret="" err="secret \"gcp-auth\" not found"
	Nov 21 23:50:22 addons-386094 kubelet[1275]: I1121 23:50:22.830125    1275 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-7jwr9" secret="" err="secret \"gcp-auth\" not found"
	Nov 21 23:50:46 addons-386094 kubelet[1275]: I1121 23:50:46.829729    1275 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-rjdxd" secret="" err="secret \"gcp-auth\" not found"
	Nov 21 23:51:04 addons-386094 kubelet[1275]: I1121 23:51:04.179870    1275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkpcg\" (UniqueName: \"kubernetes.io/projected/ef9513a7-e937-426c-a701-99c2e327b6e5-kube-api-access-gkpcg\") pod \"hello-world-app-5d498dc89-jjtzn\" (UID: \"ef9513a7-e937-426c-a701-99c2e327b6e5\") " pod="default/hello-world-app-5d498dc89-jjtzn"
	Nov 21 23:51:04 addons-386094 kubelet[1275]: I1121 23:51:04.179914    1275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/ef9513a7-e937-426c-a701-99c2e327b6e5-gcp-creds\") pod \"hello-world-app-5d498dc89-jjtzn\" (UID: \"ef9513a7-e937-426c-a701-99c2e327b6e5\") " pod="default/hello-world-app-5d498dc89-jjtzn"
	Nov 21 23:51:05 addons-386094 kubelet[1275]: I1121 23:51:05.766982    1275 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-5d498dc89-jjtzn" podStartSLOduration=0.974960541 podStartE2EDuration="1.766963551s" podCreationTimestamp="2025-11-21 23:51:04 +0000 UTC" firstStartedPulling="2025-11-21 23:51:04.44175263 +0000 UTC m=+251.688318879" lastFinishedPulling="2025-11-21 23:51:05.233755632 +0000 UTC m=+252.480321889" observedRunningTime="2025-11-21 23:51:05.76561111 +0000 UTC m=+253.012177369" watchObservedRunningTime="2025-11-21 23:51:05.766963551 +0000 UTC m=+253.013529813"
	
	
	==> storage-provisioner [a72f974d29159737be96b385ecdb775e4812608008c4cffaa78a874f21d0dbfd] <==
	W1121 23:50:41.143792       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:50:43.146207       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:50:43.150982       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:50:45.154683       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:50:45.158948       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:50:47.161606       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:50:47.165004       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:50:49.167249       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:50:49.170297       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:50:51.172719       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:50:51.177232       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:50:53.179954       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:50:53.183286       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:50:55.185776       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:50:55.189122       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:50:57.191754       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:50:57.195834       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:50:59.197984       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:50:59.201001       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:51:01.203859       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:51:01.207102       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:51:03.209635       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:51:03.213195       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:51:05.215602       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:51:05.219423       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
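
The storage-provisioner warnings filling the tail of the log dump repeat roughly every 2 s, the usual renew cadence of Endpoints-based leader election; since Kubernetes v1.33 every such v1 Endpoints read/write triggers this deprecation warning. For contrast, a minimal sketch of the Lease-based replacement using client-go (the lock name and identity below are illustrative, not the provisioner's actual values):

package main

import (
	"context"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Lease-based lock in kube-system; no deprecated Endpoints traffic.
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "minikube-hostpath", Namespace: "kube-system"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: os.Getenv("HOSTNAME")},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second, // matches the ~2 s cadence seen above
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) { /* start provisioning loop */ },
			OnStoppedLeading: func() { os.Exit(1) },
		},
	})
}
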
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-386094 -n addons-386094
helpers_test.go:269: (dbg) Run:  kubectl --context addons-386094 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-8z425 ingress-nginx-admission-patch-tztpx
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-386094 describe pod ingress-nginx-admission-create-8z425 ingress-nginx-admission-patch-tztpx
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-386094 describe pod ingress-nginx-admission-create-8z425 ingress-nginx-admission-patch-tztpx: exit status 1 (52.99209ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-8z425" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-tztpx" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-386094 describe pod ingress-nginx-admission-create-8z425 ingress-nginx-admission-patch-tztpx: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-386094 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-386094 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (227.392003ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1121 23:51:06.442221   30480 out.go:360] Setting OutFile to fd 1 ...
	I1121 23:51:06.442520   30480 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:51:06.442530   30480 out.go:374] Setting ErrFile to fd 2...
	I1121 23:51:06.442534   30480 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:51:06.442722   30480 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9122/.minikube/bin
	I1121 23:51:06.442981   30480 mustload.go:66] Loading cluster: addons-386094
	I1121 23:51:06.443293   30480 config.go:182] Loaded profile config "addons-386094": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:51:06.443309   30480 addons.go:622] checking whether the cluster is paused
	I1121 23:51:06.443387   30480 config.go:182] Loaded profile config "addons-386094": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:51:06.443399   30480 host.go:66] Checking if "addons-386094" exists ...
	I1121 23:51:06.443741   30480 cli_runner.go:164] Run: docker container inspect addons-386094 --format={{.State.Status}}
	I1121 23:51:06.461033   30480 ssh_runner.go:195] Run: systemctl --version
	I1121 23:51:06.461095   30480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386094
	I1121 23:51:06.477137   30480 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/addons-386094/id_rsa Username:docker}
	I1121 23:51:06.564935   30480 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 23:51:06.565003   30480 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 23:51:06.591903   30480 cri.go:89] found id: "dff6365d0be30ed199de6b25d353c9b393e54c67e2bc38e500a7973349a8dc37"
	I1121 23:51:06.591931   30480 cri.go:89] found id: "075e2ddbfd1b3053a90e5152f5a3c88d7b0f512a81d837cc0eec9e714320db50"
	I1121 23:51:06.591939   30480 cri.go:89] found id: "7dcc52ab64881edaa7603d95b2807743f4ad209f6d38b4cbf9a19086eca32117"
	I1121 23:51:06.591945   30480 cri.go:89] found id: "28017a975316b1551b2d1bc8dc9eca9342d94f3a04393f626159565843f3a4b9"
	I1121 23:51:06.591950   30480 cri.go:89] found id: "af7d23a1702df3cd4a84775a1c1997ae723153a2dae54920ea3e66a13f11402d"
	I1121 23:51:06.591956   30480 cri.go:89] found id: "b0ebc6a2643ce15a93031b7642059742231c8aefb11a96fa36edac0fdf3ed04c"
	I1121 23:51:06.591960   30480 cri.go:89] found id: "08636839ef014bca0593992d4306a8c75b8669bb16e7ab303f060c2b45b61813"
	I1121 23:51:06.591965   30480 cri.go:89] found id: "2cad693d643e270be01d0b986dd241986a30a6e0bf4e8ae3e5709026b2d7ec13"
	I1121 23:51:06.591970   30480 cri.go:89] found id: "f1ffe717c9acc1343be4709c4cc2b9a8d4103a7f7086b39e554752db9dc132c9"
	I1121 23:51:06.591986   30480 cri.go:89] found id: "2d8c2a76b689b579408c5432035569d448c0ccc7ac8e4a34b4e373f1fb8de96c"
	I1121 23:51:06.591994   30480 cri.go:89] found id: "fdcdf133e27bc1deb19395508496fc46e7183d7c82784e36026a37e4616b634c"
	I1121 23:51:06.592000   30480 cri.go:89] found id: "dafa66d52ee1e679c9d8823cf6b0536f31d7e9635c9b865d81c2b0b7824f7f09"
	I1121 23:51:06.592007   30480 cri.go:89] found id: "4188e634536cb2d7ee1210f1ef11fdfc593c5c568f3907a4a9835af3ba695afd"
	I1121 23:51:06.592012   30480 cri.go:89] found id: "9b71739f67786155a593d80c0e276ce41f1a6abb990cea23f25a6c9d884534fb"
	I1121 23:51:06.592019   30480 cri.go:89] found id: "d5d12cca9e0c9e6587e9f82cf3276be135a3248d2f7fd31949fc16b8b7ec6716"
	I1121 23:51:06.592031   30480 cri.go:89] found id: "d0c5a0bacbbac792d7e58a5e5c7bd0fee9db7cec3a448ebe89ea66be1fd493cc"
	I1121 23:51:06.592041   30480 cri.go:89] found id: "9ad0bc2610d4aed59a9924e959ff00b3282c0f84a8053b2c8a24bce05841da6b"
	I1121 23:51:06.592048   30480 cri.go:89] found id: "a72f974d29159737be96b385ecdb775e4812608008c4cffaa78a874f21d0dbfd"
	I1121 23:51:06.592075   30480 cri.go:89] found id: "e22fb0f003be5df198b862027391260b213589b89bd11cbaf227403d428038a9"
	I1121 23:51:06.592080   30480 cri.go:89] found id: "e2527b33e3e0a37254f4ce209f68208b0d6bb19a8e9d5f4c04577a95745c00e5"
	I1121 23:51:06.592088   30480 cri.go:89] found id: "8f7137d6b0740cb066ea7ebf0361dc186434b4e47bdb1c5dc607dabb5bfc4a73"
	I1121 23:51:06.592092   30480 cri.go:89] found id: "343a757d13fed0f3af37f66cd8776c9b1592f0157fd199fc07c446759a8c79dc"
	I1121 23:51:06.592099   30480 cri.go:89] found id: "07338917ef0048828dfbbd1553f9a031c7bd1f4699ffe7fcee081714d70aa687"
	I1121 23:51:06.592104   30480 cri.go:89] found id: "b69d049422e96952e03c6f61a20b8e9158e45490b685ec179d328f0af957256c"
	I1121 23:51:06.592112   30480 cri.go:89] found id: "d75ed4c5a71d1046bfb84c05ee8ef611de253000d3738fddfd1f3e871449f077"
	I1121 23:51:06.592116   30480 cri.go:89] found id: ""
	I1121 23:51:06.592178   30480 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 23:51:06.605383   30480 out.go:203] 
	W1121 23:51:06.606430   30480 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T23:51:06Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T23:51:06Z" level=error msg="open /run/runc: no such file or directory"
	
	W1121 23:51:06.606443   30480 out.go:285] * 
	* 
	W1121 23:51:06.609407   30480 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1121 23:51:06.610623   30480 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-amd64 -p addons-386094 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-386094 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-386094 addons disable ingress --alsologtostderr -v=1: exit status 11 (225.274518ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1121 23:51:06.668573   30541 out.go:360] Setting OutFile to fd 1 ...
	I1121 23:51:06.668751   30541 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:51:06.668765   30541 out.go:374] Setting ErrFile to fd 2...
	I1121 23:51:06.668772   30541 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:51:06.669018   30541 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9122/.minikube/bin
	I1121 23:51:06.669348   30541 mustload.go:66] Loading cluster: addons-386094
	I1121 23:51:06.669681   30541 config.go:182] Loaded profile config "addons-386094": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:51:06.669697   30541 addons.go:622] checking whether the cluster is paused
	I1121 23:51:06.669775   30541 config.go:182] Loaded profile config "addons-386094": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:51:06.669787   30541 host.go:66] Checking if "addons-386094" exists ...
	I1121 23:51:06.670189   30541 cli_runner.go:164] Run: docker container inspect addons-386094 --format={{.State.Status}}
	I1121 23:51:06.686999   30541 ssh_runner.go:195] Run: systemctl --version
	I1121 23:51:06.687040   30541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386094
	I1121 23:51:06.703153   30541 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/addons-386094/id_rsa Username:docker}
	I1121 23:51:06.789854   30541 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 23:51:06.789918   30541 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 23:51:06.817192   30541 cri.go:89] found id: "dff6365d0be30ed199de6b25d353c9b393e54c67e2bc38e500a7973349a8dc37"
	I1121 23:51:06.817210   30541 cri.go:89] found id: "075e2ddbfd1b3053a90e5152f5a3c88d7b0f512a81d837cc0eec9e714320db50"
	I1121 23:51:06.817214   30541 cri.go:89] found id: "7dcc52ab64881edaa7603d95b2807743f4ad209f6d38b4cbf9a19086eca32117"
	I1121 23:51:06.817217   30541 cri.go:89] found id: "28017a975316b1551b2d1bc8dc9eca9342d94f3a04393f626159565843f3a4b9"
	I1121 23:51:06.817219   30541 cri.go:89] found id: "af7d23a1702df3cd4a84775a1c1997ae723153a2dae54920ea3e66a13f11402d"
	I1121 23:51:06.817223   30541 cri.go:89] found id: "b0ebc6a2643ce15a93031b7642059742231c8aefb11a96fa36edac0fdf3ed04c"
	I1121 23:51:06.817226   30541 cri.go:89] found id: "08636839ef014bca0593992d4306a8c75b8669bb16e7ab303f060c2b45b61813"
	I1121 23:51:06.817229   30541 cri.go:89] found id: "2cad693d643e270be01d0b986dd241986a30a6e0bf4e8ae3e5709026b2d7ec13"
	I1121 23:51:06.817232   30541 cri.go:89] found id: "f1ffe717c9acc1343be4709c4cc2b9a8d4103a7f7086b39e554752db9dc132c9"
	I1121 23:51:06.817251   30541 cri.go:89] found id: "2d8c2a76b689b579408c5432035569d448c0ccc7ac8e4a34b4e373f1fb8de96c"
	I1121 23:51:06.817257   30541 cri.go:89] found id: "fdcdf133e27bc1deb19395508496fc46e7183d7c82784e36026a37e4616b634c"
	I1121 23:51:06.817260   30541 cri.go:89] found id: "dafa66d52ee1e679c9d8823cf6b0536f31d7e9635c9b865d81c2b0b7824f7f09"
	I1121 23:51:06.817264   30541 cri.go:89] found id: "4188e634536cb2d7ee1210f1ef11fdfc593c5c568f3907a4a9835af3ba695afd"
	I1121 23:51:06.817266   30541 cri.go:89] found id: "9b71739f67786155a593d80c0e276ce41f1a6abb990cea23f25a6c9d884534fb"
	I1121 23:51:06.817269   30541 cri.go:89] found id: "d5d12cca9e0c9e6587e9f82cf3276be135a3248d2f7fd31949fc16b8b7ec6716"
	I1121 23:51:06.817276   30541 cri.go:89] found id: "d0c5a0bacbbac792d7e58a5e5c7bd0fee9db7cec3a448ebe89ea66be1fd493cc"
	I1121 23:51:06.817280   30541 cri.go:89] found id: "9ad0bc2610d4aed59a9924e959ff00b3282c0f84a8053b2c8a24bce05841da6b"
	I1121 23:51:06.817284   30541 cri.go:89] found id: "a72f974d29159737be96b385ecdb775e4812608008c4cffaa78a874f21d0dbfd"
	I1121 23:51:06.817287   30541 cri.go:89] found id: "e22fb0f003be5df198b862027391260b213589b89bd11cbaf227403d428038a9"
	I1121 23:51:06.817290   30541 cri.go:89] found id: "e2527b33e3e0a37254f4ce209f68208b0d6bb19a8e9d5f4c04577a95745c00e5"
	I1121 23:51:06.817295   30541 cri.go:89] found id: "8f7137d6b0740cb066ea7ebf0361dc186434b4e47bdb1c5dc607dabb5bfc4a73"
	I1121 23:51:06.817298   30541 cri.go:89] found id: "343a757d13fed0f3af37f66cd8776c9b1592f0157fd199fc07c446759a8c79dc"
	I1121 23:51:06.817300   30541 cri.go:89] found id: "07338917ef0048828dfbbd1553f9a031c7bd1f4699ffe7fcee081714d70aa687"
	I1121 23:51:06.817303   30541 cri.go:89] found id: "b69d049422e96952e03c6f61a20b8e9158e45490b685ec179d328f0af957256c"
	I1121 23:51:06.817310   30541 cri.go:89] found id: "d75ed4c5a71d1046bfb84c05ee8ef611de253000d3738fddfd1f3e871449f077"
	I1121 23:51:06.817313   30541 cri.go:89] found id: ""
	I1121 23:51:06.817347   30541 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 23:51:06.830826   30541 out.go:203] 
	W1121 23:51:06.832011   30541 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T23:51:06Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T23:51:06Z" level=error msg="open /run/runc: no such file or directory"
	
	W1121 23:51:06.832029   30541 out.go:285] * 
	* 
	W1121 23:51:06.834988   30541 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1121 23:51:06.836370   30541 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-amd64 -p addons-386094 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (148.38s)
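
Every "addons disable" in this run fails the same way: the addon workload itself is healthy, but minikube's pre-flight paused check runs "sudo runc list -f json" on the node, and /run/runc does not exist under this crio configuration, so the command exits non-zero and the disable aborts with MK_ADDON_DISABLE_PAUSED. A minimal Go sketch of that two-step probe (not minikube's actual code; it only reproduces the crictl-then-runc sequence visible in the stderr above):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Step 1: list kube-system containers; this step succeeds in the log.
	ids, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("crictl:", err)
		return
	}
	fmt.Printf("kube-system containers:\n%s", ids)

	// Step 2: ask runc for paused state. runc reads its state from --root
	// (default /run/runc), which is absent on this node, so this is the
	// step that fails with "open /run/runc: no such file or directory".
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	fmt.Printf("runc list -f json: %s(err: %v)\n", out, err)
}
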

TestAddons/parallel/InspektorGadget (5.25s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-pjh9l" [22587cb9-b108-4421-a430-b02eff160e8d] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003515853s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-386094 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-386094 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (249.54254ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1121 23:48:44.693315   26942 out.go:360] Setting OutFile to fd 1 ...
	I1121 23:48:44.693486   26942 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:48:44.693505   26942 out.go:374] Setting ErrFile to fd 2...
	I1121 23:48:44.693511   26942 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:48:44.693759   26942 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9122/.minikube/bin
	I1121 23:48:44.694072   26942 mustload.go:66] Loading cluster: addons-386094
	I1121 23:48:44.694434   26942 config.go:182] Loaded profile config "addons-386094": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:48:44.694450   26942 addons.go:622] checking whether the cluster is paused
	I1121 23:48:44.694554   26942 config.go:182] Loaded profile config "addons-386094": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:48:44.694571   26942 host.go:66] Checking if "addons-386094" exists ...
	I1121 23:48:44.694958   26942 cli_runner.go:164] Run: docker container inspect addons-386094 --format={{.State.Status}}
	I1121 23:48:44.715355   26942 ssh_runner.go:195] Run: systemctl --version
	I1121 23:48:44.715397   26942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386094
	I1121 23:48:44.736565   26942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/addons-386094/id_rsa Username:docker}
	I1121 23:48:44.829570   26942 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 23:48:44.829655   26942 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 23:48:44.860512   26942 cri.go:89] found id: "075e2ddbfd1b3053a90e5152f5a3c88d7b0f512a81d837cc0eec9e714320db50"
	I1121 23:48:44.860540   26942 cri.go:89] found id: "7dcc52ab64881edaa7603d95b2807743f4ad209f6d38b4cbf9a19086eca32117"
	I1121 23:48:44.860546   26942 cri.go:89] found id: "28017a975316b1551b2d1bc8dc9eca9342d94f3a04393f626159565843f3a4b9"
	I1121 23:48:44.860552   26942 cri.go:89] found id: "af7d23a1702df3cd4a84775a1c1997ae723153a2dae54920ea3e66a13f11402d"
	I1121 23:48:44.860556   26942 cri.go:89] found id: "b0ebc6a2643ce15a93031b7642059742231c8aefb11a96fa36edac0fdf3ed04c"
	I1121 23:48:44.860561   26942 cri.go:89] found id: "08636839ef014bca0593992d4306a8c75b8669bb16e7ab303f060c2b45b61813"
	I1121 23:48:44.860565   26942 cri.go:89] found id: "2cad693d643e270be01d0b986dd241986a30a6e0bf4e8ae3e5709026b2d7ec13"
	I1121 23:48:44.860569   26942 cri.go:89] found id: "f1ffe717c9acc1343be4709c4cc2b9a8d4103a7f7086b39e554752db9dc132c9"
	I1121 23:48:44.860573   26942 cri.go:89] found id: "2d8c2a76b689b579408c5432035569d448c0ccc7ac8e4a34b4e373f1fb8de96c"
	I1121 23:48:44.860581   26942 cri.go:89] found id: "fdcdf133e27bc1deb19395508496fc46e7183d7c82784e36026a37e4616b634c"
	I1121 23:48:44.860585   26942 cri.go:89] found id: "dafa66d52ee1e679c9d8823cf6b0536f31d7e9635c9b865d81c2b0b7824f7f09"
	I1121 23:48:44.860590   26942 cri.go:89] found id: "4188e634536cb2d7ee1210f1ef11fdfc593c5c568f3907a4a9835af3ba695afd"
	I1121 23:48:44.860594   26942 cri.go:89] found id: "9b71739f67786155a593d80c0e276ce41f1a6abb990cea23f25a6c9d884534fb"
	I1121 23:48:44.860598   26942 cri.go:89] found id: "d5d12cca9e0c9e6587e9f82cf3276be135a3248d2f7fd31949fc16b8b7ec6716"
	I1121 23:48:44.860604   26942 cri.go:89] found id: "d0c5a0bacbbac792d7e58a5e5c7bd0fee9db7cec3a448ebe89ea66be1fd493cc"
	I1121 23:48:44.860616   26942 cri.go:89] found id: "9ad0bc2610d4aed59a9924e959ff00b3282c0f84a8053b2c8a24bce05841da6b"
	I1121 23:48:44.860625   26942 cri.go:89] found id: "a72f974d29159737be96b385ecdb775e4812608008c4cffaa78a874f21d0dbfd"
	I1121 23:48:44.860632   26942 cri.go:89] found id: "e22fb0f003be5df198b862027391260b213589b89bd11cbaf227403d428038a9"
	I1121 23:48:44.860636   26942 cri.go:89] found id: "e2527b33e3e0a37254f4ce209f68208b0d6bb19a8e9d5f4c04577a95745c00e5"
	I1121 23:48:44.860639   26942 cri.go:89] found id: "8f7137d6b0740cb066ea7ebf0361dc186434b4e47bdb1c5dc607dabb5bfc4a73"
	I1121 23:48:44.860654   26942 cri.go:89] found id: "343a757d13fed0f3af37f66cd8776c9b1592f0157fd199fc07c446759a8c79dc"
	I1121 23:48:44.860658   26942 cri.go:89] found id: "07338917ef0048828dfbbd1553f9a031c7bd1f4699ffe7fcee081714d70aa687"
	I1121 23:48:44.860661   26942 cri.go:89] found id: "b69d049422e96952e03c6f61a20b8e9158e45490b685ec179d328f0af957256c"
	I1121 23:48:44.860666   26942 cri.go:89] found id: "d75ed4c5a71d1046bfb84c05ee8ef611de253000d3738fddfd1f3e871449f077"
	I1121 23:48:44.860671   26942 cri.go:89] found id: ""
	I1121 23:48:44.860716   26942 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 23:48:44.875271   26942 out.go:203] 
	W1121 23:48:44.876332   26942 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T23:48:44Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T23:48:44Z" level=error msg="open /run/runc: no such file or directory"
	
	W1121 23:48:44.876355   26942 out.go:285] * 
	* 
	W1121 23:48:44.880252   26942 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1121 23:48:44.881360   26942 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 -p addons-386094 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (5.25s)
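Note on the failure mode above: before disabling an addon, minikube verifies the cluster is not paused, and the check visible in this log shells out to `sudo runc list -f json`. On this crio node that command fails with "open /run/runc: no such file or directory" (plausibly related to /run being a tmpfs in the kicbase container, per the docker inspect output later in this report), so the whole `addons disable` exits with MK_ADDON_DISABLE_PAUSED even though crictl had just listed every container. The following is a minimal Go sketch, not minikube's actual implementation, of a paused-check that treats the missing runc state directory as "nothing paused"; all names here are hypothetical.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// runcContainer holds the two fields of `runc list -f json` output that a
// paused-check needs.
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

// listPaused returns the IDs of paused containers. A missing /run/runc
// state directory (the exact failure in the log above) is treated as
// "none paused" rather than a fatal error.
func listPaused() ([]string, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		if strings.Contains(string(out), "no such file or directory") {
			return nil, nil // no state dir -> nothing can be paused
		}
		return nil, fmt.Errorf("runc list: %w: %s", err, out)
	}
	var containers []runcContainer
	if err := json.Unmarshal(out, &containers); err != nil {
		return nil, err
	}
	var paused []string
	for _, c := range containers {
		if c.Status == "paused" {
			paused = append(paused, c.ID)
		}
	}
	return paused, nil
}

func main() {
	ids, err := listPaused()
	fmt.Println(ids, err)
}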

TestAddons/parallel/MetricsServer (5.29s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 2.739593ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-jj26h" [e0c05e28-f4b5-448b-959e-57c85afe18c7] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.002348009s
addons_test.go:463: (dbg) Run:  kubectl --context addons-386094 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-386094 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-386094 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (225.510254ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1121 23:48:32.042750   25052 out.go:360] Setting OutFile to fd 1 ...
	I1121 23:48:32.042999   25052 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:48:32.043009   25052 out.go:374] Setting ErrFile to fd 2...
	I1121 23:48:32.043013   25052 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:48:32.043232   25052 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9122/.minikube/bin
	I1121 23:48:32.043480   25052 mustload.go:66] Loading cluster: addons-386094
	I1121 23:48:32.043769   25052 config.go:182] Loaded profile config "addons-386094": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:48:32.043783   25052 addons.go:622] checking whether the cluster is paused
	I1121 23:48:32.043862   25052 config.go:182] Loaded profile config "addons-386094": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:48:32.043873   25052 host.go:66] Checking if "addons-386094" exists ...
	I1121 23:48:32.044247   25052 cli_runner.go:164] Run: docker container inspect addons-386094 --format={{.State.Status}}
	I1121 23:48:32.061142   25052 ssh_runner.go:195] Run: systemctl --version
	I1121 23:48:32.061192   25052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386094
	I1121 23:48:32.077632   25052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/addons-386094/id_rsa Username:docker}
	I1121 23:48:32.164860   25052 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 23:48:32.164944   25052 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 23:48:32.192333   25052 cri.go:89] found id: "075e2ddbfd1b3053a90e5152f5a3c88d7b0f512a81d837cc0eec9e714320db50"
	I1121 23:48:32.192350   25052 cri.go:89] found id: "7dcc52ab64881edaa7603d95b2807743f4ad209f6d38b4cbf9a19086eca32117"
	I1121 23:48:32.192354   25052 cri.go:89] found id: "28017a975316b1551b2d1bc8dc9eca9342d94f3a04393f626159565843f3a4b9"
	I1121 23:48:32.192358   25052 cri.go:89] found id: "af7d23a1702df3cd4a84775a1c1997ae723153a2dae54920ea3e66a13f11402d"
	I1121 23:48:32.192361   25052 cri.go:89] found id: "b0ebc6a2643ce15a93031b7642059742231c8aefb11a96fa36edac0fdf3ed04c"
	I1121 23:48:32.192364   25052 cri.go:89] found id: "08636839ef014bca0593992d4306a8c75b8669bb16e7ab303f060c2b45b61813"
	I1121 23:48:32.192366   25052 cri.go:89] found id: "2cad693d643e270be01d0b986dd241986a30a6e0bf4e8ae3e5709026b2d7ec13"
	I1121 23:48:32.192369   25052 cri.go:89] found id: "f1ffe717c9acc1343be4709c4cc2b9a8d4103a7f7086b39e554752db9dc132c9"
	I1121 23:48:32.192372   25052 cri.go:89] found id: "2d8c2a76b689b579408c5432035569d448c0ccc7ac8e4a34b4e373f1fb8de96c"
	I1121 23:48:32.192377   25052 cri.go:89] found id: "fdcdf133e27bc1deb19395508496fc46e7183d7c82784e36026a37e4616b634c"
	I1121 23:48:32.192379   25052 cri.go:89] found id: "dafa66d52ee1e679c9d8823cf6b0536f31d7e9635c9b865d81c2b0b7824f7f09"
	I1121 23:48:32.192382   25052 cri.go:89] found id: "4188e634536cb2d7ee1210f1ef11fdfc593c5c568f3907a4a9835af3ba695afd"
	I1121 23:48:32.192385   25052 cri.go:89] found id: "9b71739f67786155a593d80c0e276ce41f1a6abb990cea23f25a6c9d884534fb"
	I1121 23:48:32.192389   25052 cri.go:89] found id: "d5d12cca9e0c9e6587e9f82cf3276be135a3248d2f7fd31949fc16b8b7ec6716"
	I1121 23:48:32.192392   25052 cri.go:89] found id: "d0c5a0bacbbac792d7e58a5e5c7bd0fee9db7cec3a448ebe89ea66be1fd493cc"
	I1121 23:48:32.192399   25052 cri.go:89] found id: "9ad0bc2610d4aed59a9924e959ff00b3282c0f84a8053b2c8a24bce05841da6b"
	I1121 23:48:32.192402   25052 cri.go:89] found id: "a72f974d29159737be96b385ecdb775e4812608008c4cffaa78a874f21d0dbfd"
	I1121 23:48:32.192406   25052 cri.go:89] found id: "e22fb0f003be5df198b862027391260b213589b89bd11cbaf227403d428038a9"
	I1121 23:48:32.192409   25052 cri.go:89] found id: "e2527b33e3e0a37254f4ce209f68208b0d6bb19a8e9d5f4c04577a95745c00e5"
	I1121 23:48:32.192412   25052 cri.go:89] found id: "8f7137d6b0740cb066ea7ebf0361dc186434b4e47bdb1c5dc607dabb5bfc4a73"
	I1121 23:48:32.192415   25052 cri.go:89] found id: "343a757d13fed0f3af37f66cd8776c9b1592f0157fd199fc07c446759a8c79dc"
	I1121 23:48:32.192417   25052 cri.go:89] found id: "07338917ef0048828dfbbd1553f9a031c7bd1f4699ffe7fcee081714d70aa687"
	I1121 23:48:32.192420   25052 cri.go:89] found id: "b69d049422e96952e03c6f61a20b8e9158e45490b685ec179d328f0af957256c"
	I1121 23:48:32.192423   25052 cri.go:89] found id: "d75ed4c5a71d1046bfb84c05ee8ef611de253000d3738fddfd1f3e871449f077"
	I1121 23:48:32.192426   25052 cri.go:89] found id: ""
	I1121 23:48:32.192458   25052 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 23:48:32.205240   25052 out.go:203] 
	W1121 23:48:32.206166   25052 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T23:48:32Z" level=error msg="open /run/runc: no such file or directory"
	
	W1121 23:48:32.206188   25052 out.go:285] * 
	W1121 23:48:32.209047   25052 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1121 23:48:32.210026   25052 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-amd64 -p addons-386094 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.29s)
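The metrics half of this test passed: the pod behind the k8s-app=metrics-server label became healthy within ~5s and `kubectl top pods` returned data; only the disable step failed, with the same runc paused-check error as above. A rough Go sketch of the readiness gate the passing half drives, assuming the label, namespace, context, and 6m0s budget shown in the log; the helper names are illustrative, not the test's code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// phase reads the phase of the first pod matching the test's label selector.
func phase() string {
	out, _ := exec.Command("kubectl", "--context", "addons-386094",
		"get", "pods", "-n", "kube-system",
		"-l", "k8s-app=metrics-server",
		"-o", "jsonpath={.items[0].status.phase}").Output()
	return strings.TrimSpace(string(out))
}

func main() {
	// Same 6m0s budget the test allows for the pod to stabilize.
	deadline := time.Now().Add(6 * time.Minute)
	for phase() != "Running" && time.Now().Before(deadline) {
		time.Sleep(2 * time.Second)
	}
	// Then confirm the metrics API actually answers, as the test does.
	out, err := exec.Command("kubectl", "--context", "addons-386094",
		"top", "pods", "-n", "kube-system").CombinedOutput()
	fmt.Printf("%s err=%v\n", out, err)
}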

TestAddons/parallel/CSI (48.79s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1121 23:48:38.411371   14585 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1121 23:48:38.414437   14585 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1121 23:48:38.414460   14585 kapi.go:107] duration metric: took 3.118801ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.128806ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-386094 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-386094 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-386094 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-386094 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-386094 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-386094 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-386094 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-386094 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-386094 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-386094 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-386094 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-386094 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [28870e4c-a94d-431d-b57d-bab37df06ce5] Pending
helpers_test.go:352: "task-pv-pod" [28870e4c-a94d-431d-b57d-bab37df06ce5] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [28870e4c-a94d-431d-b57d-bab37df06ce5] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.003390657s
addons_test.go:572: (dbg) Run:  kubectl --context addons-386094 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-386094 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-386094 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-386094 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-386094 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-386094 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-386094 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-386094 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-386094 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-386094 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-386094 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-386094 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-386094 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-386094 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-386094 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-386094 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-386094 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-386094 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-386094 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-386094 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-386094 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-386094 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-386094 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-386094 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-386094 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-386094 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-386094 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-386094 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [0f9e8b68-5a5a-4fd0-b893-dc7cdbb7222e] Pending
helpers_test.go:352: "task-pv-pod-restore" [0f9e8b68-5a5a-4fd0-b893-dc7cdbb7222e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [0f9e8b68-5a5a-4fd0-b893-dc7cdbb7222e] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003505635s
addons_test.go:614: (dbg) Run:  kubectl --context addons-386094 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-386094 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-386094 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-386094 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-386094 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (228.005532ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1121 23:49:26.797274   28306 out.go:360] Setting OutFile to fd 1 ...
	I1121 23:49:26.797551   28306 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:49:26.797561   28306 out.go:374] Setting ErrFile to fd 2...
	I1121 23:49:26.797565   28306 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:49:26.797757   28306 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9122/.minikube/bin
	I1121 23:49:26.797969   28306 mustload.go:66] Loading cluster: addons-386094
	I1121 23:49:26.798258   28306 config.go:182] Loaded profile config "addons-386094": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:49:26.798271   28306 addons.go:622] checking whether the cluster is paused
	I1121 23:49:26.798349   28306 config.go:182] Loaded profile config "addons-386094": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:49:26.798361   28306 host.go:66] Checking if "addons-386094" exists ...
	I1121 23:49:26.798725   28306 cli_runner.go:164] Run: docker container inspect addons-386094 --format={{.State.Status}}
	I1121 23:49:26.816595   28306 ssh_runner.go:195] Run: systemctl --version
	I1121 23:49:26.816644   28306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386094
	I1121 23:49:26.833873   28306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/addons-386094/id_rsa Username:docker}
	I1121 23:49:26.922021   28306 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 23:49:26.922106   28306 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 23:49:26.949023   28306 cri.go:89] found id: "075e2ddbfd1b3053a90e5152f5a3c88d7b0f512a81d837cc0eec9e714320db50"
	I1121 23:49:26.949040   28306 cri.go:89] found id: "7dcc52ab64881edaa7603d95b2807743f4ad209f6d38b4cbf9a19086eca32117"
	I1121 23:49:26.949044   28306 cri.go:89] found id: "28017a975316b1551b2d1bc8dc9eca9342d94f3a04393f626159565843f3a4b9"
	I1121 23:49:26.949047   28306 cri.go:89] found id: "af7d23a1702df3cd4a84775a1c1997ae723153a2dae54920ea3e66a13f11402d"
	I1121 23:49:26.949071   28306 cri.go:89] found id: "b0ebc6a2643ce15a93031b7642059742231c8aefb11a96fa36edac0fdf3ed04c"
	I1121 23:49:26.949079   28306 cri.go:89] found id: "08636839ef014bca0593992d4306a8c75b8669bb16e7ab303f060c2b45b61813"
	I1121 23:49:26.949084   28306 cri.go:89] found id: "2cad693d643e270be01d0b986dd241986a30a6e0bf4e8ae3e5709026b2d7ec13"
	I1121 23:49:26.949088   28306 cri.go:89] found id: "f1ffe717c9acc1343be4709c4cc2b9a8d4103a7f7086b39e554752db9dc132c9"
	I1121 23:49:26.949093   28306 cri.go:89] found id: "2d8c2a76b689b579408c5432035569d448c0ccc7ac8e4a34b4e373f1fb8de96c"
	I1121 23:49:26.949109   28306 cri.go:89] found id: "fdcdf133e27bc1deb19395508496fc46e7183d7c82784e36026a37e4616b634c"
	I1121 23:49:26.949116   28306 cri.go:89] found id: "dafa66d52ee1e679c9d8823cf6b0536f31d7e9635c9b865d81c2b0b7824f7f09"
	I1121 23:49:26.949119   28306 cri.go:89] found id: "4188e634536cb2d7ee1210f1ef11fdfc593c5c568f3907a4a9835af3ba695afd"
	I1121 23:49:26.949121   28306 cri.go:89] found id: "9b71739f67786155a593d80c0e276ce41f1a6abb990cea23f25a6c9d884534fb"
	I1121 23:49:26.949124   28306 cri.go:89] found id: "d5d12cca9e0c9e6587e9f82cf3276be135a3248d2f7fd31949fc16b8b7ec6716"
	I1121 23:49:26.949127   28306 cri.go:89] found id: "d0c5a0bacbbac792d7e58a5e5c7bd0fee9db7cec3a448ebe89ea66be1fd493cc"
	I1121 23:49:26.949159   28306 cri.go:89] found id: "9ad0bc2610d4aed59a9924e959ff00b3282c0f84a8053b2c8a24bce05841da6b"
	I1121 23:49:26.949168   28306 cri.go:89] found id: "a72f974d29159737be96b385ecdb775e4812608008c4cffaa78a874f21d0dbfd"
	I1121 23:49:26.949172   28306 cri.go:89] found id: "e22fb0f003be5df198b862027391260b213589b89bd11cbaf227403d428038a9"
	I1121 23:49:26.949175   28306 cri.go:89] found id: "e2527b33e3e0a37254f4ce209f68208b0d6bb19a8e9d5f4c04577a95745c00e5"
	I1121 23:49:26.949178   28306 cri.go:89] found id: "8f7137d6b0740cb066ea7ebf0361dc186434b4e47bdb1c5dc607dabb5bfc4a73"
	I1121 23:49:26.949181   28306 cri.go:89] found id: "343a757d13fed0f3af37f66cd8776c9b1592f0157fd199fc07c446759a8c79dc"
	I1121 23:49:26.949183   28306 cri.go:89] found id: "07338917ef0048828dfbbd1553f9a031c7bd1f4699ffe7fcee081714d70aa687"
	I1121 23:49:26.949186   28306 cri.go:89] found id: "b69d049422e96952e03c6f61a20b8e9158e45490b685ec179d328f0af957256c"
	I1121 23:49:26.949189   28306 cri.go:89] found id: "d75ed4c5a71d1046bfb84c05ee8ef611de253000d3738fddfd1f3e871449f077"
	I1121 23:49:26.949191   28306 cri.go:89] found id: ""
	I1121 23:49:26.949226   28306 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 23:49:26.962617   28306 out.go:203] 
	W1121 23:49:26.963764   28306 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T23:49:26Z" level=error msg="open /run/runc: no such file or directory"
	
	W1121 23:49:26.963782   28306 out.go:285] * 
	W1121 23:49:26.966768   28306 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1121 23:49:26.967746   28306 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-amd64 -p addons-386094 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-386094 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-386094 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (227.555194ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1121 23:49:27.028022   28366 out.go:360] Setting OutFile to fd 1 ...
	I1121 23:49:27.028178   28366 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:49:27.028188   28366 out.go:374] Setting ErrFile to fd 2...
	I1121 23:49:27.028191   28366 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:49:27.028403   28366 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9122/.minikube/bin
	I1121 23:49:27.028652   28366 mustload.go:66] Loading cluster: addons-386094
	I1121 23:49:27.028940   28366 config.go:182] Loaded profile config "addons-386094": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:49:27.028954   28366 addons.go:622] checking whether the cluster is paused
	I1121 23:49:27.029030   28366 config.go:182] Loaded profile config "addons-386094": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:49:27.029042   28366 host.go:66] Checking if "addons-386094" exists ...
	I1121 23:49:27.029410   28366 cli_runner.go:164] Run: docker container inspect addons-386094 --format={{.State.Status}}
	I1121 23:49:27.047027   28366 ssh_runner.go:195] Run: systemctl --version
	I1121 23:49:27.047105   28366 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386094
	I1121 23:49:27.063729   28366 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/addons-386094/id_rsa Username:docker}
	I1121 23:49:27.150849   28366 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 23:49:27.150942   28366 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 23:49:27.177938   28366 cri.go:89] found id: "075e2ddbfd1b3053a90e5152f5a3c88d7b0f512a81d837cc0eec9e714320db50"
	I1121 23:49:27.177958   28366 cri.go:89] found id: "7dcc52ab64881edaa7603d95b2807743f4ad209f6d38b4cbf9a19086eca32117"
	I1121 23:49:27.177964   28366 cri.go:89] found id: "28017a975316b1551b2d1bc8dc9eca9342d94f3a04393f626159565843f3a4b9"
	I1121 23:49:27.177969   28366 cri.go:89] found id: "af7d23a1702df3cd4a84775a1c1997ae723153a2dae54920ea3e66a13f11402d"
	I1121 23:49:27.177974   28366 cri.go:89] found id: "b0ebc6a2643ce15a93031b7642059742231c8aefb11a96fa36edac0fdf3ed04c"
	I1121 23:49:27.177983   28366 cri.go:89] found id: "08636839ef014bca0593992d4306a8c75b8669bb16e7ab303f060c2b45b61813"
	I1121 23:49:27.177997   28366 cri.go:89] found id: "2cad693d643e270be01d0b986dd241986a30a6e0bf4e8ae3e5709026b2d7ec13"
	I1121 23:49:27.178004   28366 cri.go:89] found id: "f1ffe717c9acc1343be4709c4cc2b9a8d4103a7f7086b39e554752db9dc132c9"
	I1121 23:49:27.178007   28366 cri.go:89] found id: "2d8c2a76b689b579408c5432035569d448c0ccc7ac8e4a34b4e373f1fb8de96c"
	I1121 23:49:27.178012   28366 cri.go:89] found id: "fdcdf133e27bc1deb19395508496fc46e7183d7c82784e36026a37e4616b634c"
	I1121 23:49:27.178017   28366 cri.go:89] found id: "dafa66d52ee1e679c9d8823cf6b0536f31d7e9635c9b865d81c2b0b7824f7f09"
	I1121 23:49:27.178020   28366 cri.go:89] found id: "4188e634536cb2d7ee1210f1ef11fdfc593c5c568f3907a4a9835af3ba695afd"
	I1121 23:49:27.178022   28366 cri.go:89] found id: "9b71739f67786155a593d80c0e276ce41f1a6abb990cea23f25a6c9d884534fb"
	I1121 23:49:27.178025   28366 cri.go:89] found id: "d5d12cca9e0c9e6587e9f82cf3276be135a3248d2f7fd31949fc16b8b7ec6716"
	I1121 23:49:27.178029   28366 cri.go:89] found id: "d0c5a0bacbbac792d7e58a5e5c7bd0fee9db7cec3a448ebe89ea66be1fd493cc"
	I1121 23:49:27.178033   28366 cri.go:89] found id: "9ad0bc2610d4aed59a9924e959ff00b3282c0f84a8053b2c8a24bce05841da6b"
	I1121 23:49:27.178038   28366 cri.go:89] found id: "a72f974d29159737be96b385ecdb775e4812608008c4cffaa78a874f21d0dbfd"
	I1121 23:49:27.178042   28366 cri.go:89] found id: "e22fb0f003be5df198b862027391260b213589b89bd11cbaf227403d428038a9"
	I1121 23:49:27.178045   28366 cri.go:89] found id: "e2527b33e3e0a37254f4ce209f68208b0d6bb19a8e9d5f4c04577a95745c00e5"
	I1121 23:49:27.178048   28366 cri.go:89] found id: "8f7137d6b0740cb066ea7ebf0361dc186434b4e47bdb1c5dc607dabb5bfc4a73"
	I1121 23:49:27.178075   28366 cri.go:89] found id: "343a757d13fed0f3af37f66cd8776c9b1592f0157fd199fc07c446759a8c79dc"
	I1121 23:49:27.178081   28366 cri.go:89] found id: "07338917ef0048828dfbbd1553f9a031c7bd1f4699ffe7fcee081714d70aa687"
	I1121 23:49:27.178086   28366 cri.go:89] found id: "b69d049422e96952e03c6f61a20b8e9158e45490b685ec179d328f0af957256c"
	I1121 23:49:27.178090   28366 cri.go:89] found id: "d75ed4c5a71d1046bfb84c05ee8ef611de253000d3738fddfd1f3e871449f077"
	I1121 23:49:27.178094   28366 cri.go:89] found id: ""
	I1121 23:49:27.178133   28366 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 23:49:27.190684   28366 out.go:203] 
	W1121 23:49:27.191758   28366 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T23:49:27Z" level=error msg="open /run/runc: no such file or directory"
	
	W1121 23:49:27.191774   28366 out.go:285] * 
	W1121 23:49:27.194708   28366 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1121 23:49:27.195754   28366 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-amd64 -p addons-386094 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (48.79s)
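The CSI workflow itself succeeded end to end (PVC bound, pod ran, the VolumeSnapshot became ready, and the restored pod came up in ~7s); only the two addon-disable teardown steps hit the runc paused-check error. For reference, the sequence the test drives condenses to the kubectl calls visible in the log above; a short Go sketch of that sequence follows, with the test's manifest paths and profile name, and with waiting/error handling deliberately omitted for brevity.

package main

import (
	"fmt"
	"os/exec"
)

// kubectl runs one kubectl call against the test profile and echoes output.
func kubectl(args ...string) error {
	cmd := exec.Command("kubectl",
		append([]string{"--context", "addons-386094"}, args...)...)
	out, err := cmd.CombinedOutput()
	fmt.Printf("kubectl %v\n%s", args, out)
	return err
}

func main() {
	// 1. Claim storage through the csi-hostpath driver and run a pod on it.
	kubectl("create", "-f", "testdata/csi-hostpath-driver/pvc.yaml")
	kubectl("create", "-f", "testdata/csi-hostpath-driver/pv-pod.yaml")
	// 2. Snapshot the volume, then drop the original pod and claim.
	kubectl("create", "-f", "testdata/csi-hostpath-driver/snapshot.yaml")
	kubectl("delete", "pod", "task-pv-pod")
	kubectl("delete", "pvc", "hpvc")
	// 3. Restore a new claim and pod from the snapshot.
	kubectl("create", "-f", "testdata/csi-hostpath-driver/pvc-restore.yaml")
	kubectl("create", "-f", "testdata/csi-hostpath-driver/pv-pod-restore.yaml")
}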

TestAddons/parallel/Headlamp (2.37s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-386094 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-386094 --alsologtostderr -v=1: exit status 11 (236.294668ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1121 23:48:26.983621   24216 out.go:360] Setting OutFile to fd 1 ...
	I1121 23:48:26.983761   24216 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:48:26.983771   24216 out.go:374] Setting ErrFile to fd 2...
	I1121 23:48:26.983776   24216 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:48:26.983964   24216 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9122/.minikube/bin
	I1121 23:48:26.984292   24216 mustload.go:66] Loading cluster: addons-386094
	I1121 23:48:26.984630   24216 config.go:182] Loaded profile config "addons-386094": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:48:26.984648   24216 addons.go:622] checking whether the cluster is paused
	I1121 23:48:26.984741   24216 config.go:182] Loaded profile config "addons-386094": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:48:26.984757   24216 host.go:66] Checking if "addons-386094" exists ...
	I1121 23:48:26.985206   24216 cli_runner.go:164] Run: docker container inspect addons-386094 --format={{.State.Status}}
	I1121 23:48:27.003116   24216 ssh_runner.go:195] Run: systemctl --version
	I1121 23:48:27.003153   24216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386094
	I1121 23:48:27.019714   24216 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/addons-386094/id_rsa Username:docker}
	I1121 23:48:27.106815   24216 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 23:48:27.106908   24216 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 23:48:27.135592   24216 cri.go:89] found id: "075e2ddbfd1b3053a90e5152f5a3c88d7b0f512a81d837cc0eec9e714320db50"
	I1121 23:48:27.135611   24216 cri.go:89] found id: "7dcc52ab64881edaa7603d95b2807743f4ad209f6d38b4cbf9a19086eca32117"
	I1121 23:48:27.135617   24216 cri.go:89] found id: "28017a975316b1551b2d1bc8dc9eca9342d94f3a04393f626159565843f3a4b9"
	I1121 23:48:27.135622   24216 cri.go:89] found id: "af7d23a1702df3cd4a84775a1c1997ae723153a2dae54920ea3e66a13f11402d"
	I1121 23:48:27.135626   24216 cri.go:89] found id: "b0ebc6a2643ce15a93031b7642059742231c8aefb11a96fa36edac0fdf3ed04c"
	I1121 23:48:27.135634   24216 cri.go:89] found id: "08636839ef014bca0593992d4306a8c75b8669bb16e7ab303f060c2b45b61813"
	I1121 23:48:27.135638   24216 cri.go:89] found id: "2cad693d643e270be01d0b986dd241986a30a6e0bf4e8ae3e5709026b2d7ec13"
	I1121 23:48:27.135642   24216 cri.go:89] found id: "f1ffe717c9acc1343be4709c4cc2b9a8d4103a7f7086b39e554752db9dc132c9"
	I1121 23:48:27.135647   24216 cri.go:89] found id: "2d8c2a76b689b579408c5432035569d448c0ccc7ac8e4a34b4e373f1fb8de96c"
	I1121 23:48:27.135657   24216 cri.go:89] found id: "fdcdf133e27bc1deb19395508496fc46e7183d7c82784e36026a37e4616b634c"
	I1121 23:48:27.135667   24216 cri.go:89] found id: "dafa66d52ee1e679c9d8823cf6b0536f31d7e9635c9b865d81c2b0b7824f7f09"
	I1121 23:48:27.135672   24216 cri.go:89] found id: "4188e634536cb2d7ee1210f1ef11fdfc593c5c568f3907a4a9835af3ba695afd"
	I1121 23:48:27.135681   24216 cri.go:89] found id: "9b71739f67786155a593d80c0e276ce41f1a6abb990cea23f25a6c9d884534fb"
	I1121 23:48:27.135686   24216 cri.go:89] found id: "d5d12cca9e0c9e6587e9f82cf3276be135a3248d2f7fd31949fc16b8b7ec6716"
	I1121 23:48:27.135694   24216 cri.go:89] found id: "d0c5a0bacbbac792d7e58a5e5c7bd0fee9db7cec3a448ebe89ea66be1fd493cc"
	I1121 23:48:27.135705   24216 cri.go:89] found id: "9ad0bc2610d4aed59a9924e959ff00b3282c0f84a8053b2c8a24bce05841da6b"
	I1121 23:48:27.135713   24216 cri.go:89] found id: "a72f974d29159737be96b385ecdb775e4812608008c4cffaa78a874f21d0dbfd"
	I1121 23:48:27.135719   24216 cri.go:89] found id: "e22fb0f003be5df198b862027391260b213589b89bd11cbaf227403d428038a9"
	I1121 23:48:27.135723   24216 cri.go:89] found id: "e2527b33e3e0a37254f4ce209f68208b0d6bb19a8e9d5f4c04577a95745c00e5"
	I1121 23:48:27.135727   24216 cri.go:89] found id: "8f7137d6b0740cb066ea7ebf0361dc186434b4e47bdb1c5dc607dabb5bfc4a73"
	I1121 23:48:27.135735   24216 cri.go:89] found id: "343a757d13fed0f3af37f66cd8776c9b1592f0157fd199fc07c446759a8c79dc"
	I1121 23:48:27.135740   24216 cri.go:89] found id: "07338917ef0048828dfbbd1553f9a031c7bd1f4699ffe7fcee081714d70aa687"
	I1121 23:48:27.135744   24216 cri.go:89] found id: "b69d049422e96952e03c6f61a20b8e9158e45490b685ec179d328f0af957256c"
	I1121 23:48:27.135749   24216 cri.go:89] found id: "d75ed4c5a71d1046bfb84c05ee8ef611de253000d3738fddfd1f3e871449f077"
	I1121 23:48:27.135753   24216 cri.go:89] found id: ""
	I1121 23:48:27.135795   24216 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 23:48:27.154012   24216 out.go:203] 
	W1121 23:48:27.155017   24216 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T23:48:27Z" level=error msg="open /run/runc: no such file or directory"
	
	W1121 23:48:27.155036   24216 out.go:285] * 
	W1121 23:48:27.158576   24216 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1121 23:48:27.159721   24216 out.go:203] 

** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-386094 --alsologtostderr -v=1": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-386094
helpers_test.go:243: (dbg) docker inspect addons-386094:

-- stdout --
	[
	    {
	        "Id": "0a5362377949521c1545f6c039f39d2ec2f463f75d5949ca6db62ddafe2a34f9",
	        "Created": "2025-11-21T23:46:41.209448742Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 16591,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-21T23:46:41.245417207Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e5906b22e872a17998ae88aee6d850484e7a99144e0db6afcf2c44a53e6042d4",
	        "ResolvConfPath": "/var/lib/docker/containers/0a5362377949521c1545f6c039f39d2ec2f463f75d5949ca6db62ddafe2a34f9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0a5362377949521c1545f6c039f39d2ec2f463f75d5949ca6db62ddafe2a34f9/hostname",
	        "HostsPath": "/var/lib/docker/containers/0a5362377949521c1545f6c039f39d2ec2f463f75d5949ca6db62ddafe2a34f9/hosts",
	        "LogPath": "/var/lib/docker/containers/0a5362377949521c1545f6c039f39d2ec2f463f75d5949ca6db62ddafe2a34f9/0a5362377949521c1545f6c039f39d2ec2f463f75d5949ca6db62ddafe2a34f9-json.log",
	        "Name": "/addons-386094",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-386094:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-386094",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0a5362377949521c1545f6c039f39d2ec2f463f75d5949ca6db62ddafe2a34f9",
	                "LowerDir": "/var/lib/docker/overlay2/f1549353d16c9f3c0f32b3f4e20224e8347fb9673fb8a8a4386811faa028f190-init/diff:/var/lib/docker/overlay2/ba9391341a6f38c98c330a240b44a37e901d779a7c95e15c141c59c46e09c348/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f1549353d16c9f3c0f32b3f4e20224e8347fb9673fb8a8a4386811faa028f190/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f1549353d16c9f3c0f32b3f4e20224e8347fb9673fb8a8a4386811faa028f190/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f1549353d16c9f3c0f32b3f4e20224e8347fb9673fb8a8a4386811faa028f190/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-386094",
	                "Source": "/var/lib/docker/volumes/addons-386094/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-386094",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-386094",
	                "name.minikube.sigs.k8s.io": "addons-386094",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "a9786f385a998a88fa0e13a81952822a6fa54e1ae03327219d6b49a8ca7d36ff",
	            "SandboxKey": "/var/run/docker/netns/a9786f385a99",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-386094": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "610e21f260125ec5deb3917faa6075970dd2aa22a45c1761187c501484dce43e",
	                    "EndpointID": "6892ee1c5b1a406430c54c7c4ca8097888c2182216d49acd1126d847e7e71d54",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "b6:1c:a4:ed:83:af",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-386094",
	                        "0a5362377949"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
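The NetworkSettings block above shows the kicbase container's port map (22/tcp on 127.0.0.1:32768, 8443/tcp on 32771, and so on). Throughout this log, minikube resolves the SSH port the same way, with a `docker container inspect -f` Go template. A small Go sketch of that lookup, shelling out exactly as the log lines do; the hostPort helper is hypothetical, and the profile name is the one from this run.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostPort extracts the published host port for a container port, using the
// same inspect template that appears in the log
// ('{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}').
func hostPort(container, port string) (string, error) {
	tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, port)
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	p, err := hostPort("addons-386094", "22/tcp")
	fmt.Println(p, err) // expect "32768" per the NetworkSettings block above
}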
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-386094 -n addons-386094
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-386094 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-386094 logs -n 25: (1.02823892s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-706420 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-706420   │ jenkins │ v1.37.0 │ 21 Nov 25 23:46 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 21 Nov 25 23:46 UTC │ 21 Nov 25 23:46 UTC │
	│ delete  │ -p download-only-706420                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-706420   │ jenkins │ v1.37.0 │ 21 Nov 25 23:46 UTC │ 21 Nov 25 23:46 UTC │
	│ start   │ -o=json --download-only -p download-only-756396 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-756396   │ jenkins │ v1.37.0 │ 21 Nov 25 23:46 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 21 Nov 25 23:46 UTC │ 21 Nov 25 23:46 UTC │
	│ delete  │ -p download-only-756396                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-756396   │ jenkins │ v1.37.0 │ 21 Nov 25 23:46 UTC │ 21 Nov 25 23:46 UTC │
	│ delete  │ -p download-only-706420                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-706420   │ jenkins │ v1.37.0 │ 21 Nov 25 23:46 UTC │ 21 Nov 25 23:46 UTC │
	│ delete  │ -p download-only-756396                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-756396   │ jenkins │ v1.37.0 │ 21 Nov 25 23:46 UTC │ 21 Nov 25 23:46 UTC │
	│ start   │ --download-only -p download-docker-688978 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-688978 │ jenkins │ v1.37.0 │ 21 Nov 25 23:46 UTC │                     │
	│ delete  │ -p download-docker-688978                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-688978 │ jenkins │ v1.37.0 │ 21 Nov 25 23:46 UTC │ 21 Nov 25 23:46 UTC │
	│ start   │ --download-only -p binary-mirror-604163 --alsologtostderr --binary-mirror http://127.0.0.1:33157 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-604163   │ jenkins │ v1.37.0 │ 21 Nov 25 23:46 UTC │                     │
	│ delete  │ -p binary-mirror-604163                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-604163   │ jenkins │ v1.37.0 │ 21 Nov 25 23:46 UTC │ 21 Nov 25 23:46 UTC │
	│ addons  │ enable dashboard -p addons-386094                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-386094          │ jenkins │ v1.37.0 │ 21 Nov 25 23:46 UTC │                     │
	│ addons  │ disable dashboard -p addons-386094                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-386094          │ jenkins │ v1.37.0 │ 21 Nov 25 23:46 UTC │                     │
	│ start   │ -p addons-386094 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-386094          │ jenkins │ v1.37.0 │ 21 Nov 25 23:46 UTC │ 21 Nov 25 23:48 UTC │
	│ addons  │ addons-386094 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-386094          │ jenkins │ v1.37.0 │ 21 Nov 25 23:48 UTC │                     │
	│ addons  │ addons-386094 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-386094          │ jenkins │ v1.37.0 │ 21 Nov 25 23:48 UTC │                     │
	│ addons  │ enable headlamp -p addons-386094 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-386094          │ jenkins │ v1.37.0 │ 21 Nov 25 23:48 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 23:46:18
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1121 23:46:18.292103   15929 out.go:360] Setting OutFile to fd 1 ...
	I1121 23:46:18.292387   15929 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:46:18.292399   15929 out.go:374] Setting ErrFile to fd 2...
	I1121 23:46:18.292404   15929 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:46:18.292607   15929 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9122/.minikube/bin
	I1121 23:46:18.293093   15929 out.go:368] Setting JSON to false
	I1121 23:46:18.293959   15929 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1727,"bootTime":1763767051,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1121 23:46:18.294011   15929 start.go:143] virtualization: kvm guest
	I1121 23:46:18.295677   15929 out.go:179] * [addons-386094] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1121 23:46:18.296745   15929 notify.go:221] Checking for updates...
	I1121 23:46:18.296750   15929 out.go:179]   - MINIKUBE_LOCATION=21934
	I1121 23:46:18.297894   15929 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 23:46:18.299274   15929 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-9122/kubeconfig
	I1121 23:46:18.300355   15929 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-9122/.minikube
	I1121 23:46:18.301474   15929 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1121 23:46:18.302426   15929 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 23:46:18.303568   15929 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 23:46:18.325487   15929 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1121 23:46:18.325629   15929 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 23:46:18.383350   15929 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-21 23:46:18.374065415 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 23:46:18.383490   15929 docker.go:319] overlay module found
	I1121 23:46:18.385043   15929 out.go:179] * Using the docker driver based on user configuration
	I1121 23:46:18.385957   15929 start.go:309] selected driver: docker
	I1121 23:46:18.385970   15929 start.go:930] validating driver "docker" against <nil>
	I1121 23:46:18.385983   15929 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 23:46:18.386727   15929 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 23:46:18.439164   15929 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-21 23:46:18.430618832 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 23:46:18.439331   15929 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1121 23:46:18.439542   15929 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 23:46:18.440908   15929 out.go:179] * Using Docker driver with root privileges
	I1121 23:46:18.441850   15929 cni.go:84] Creating CNI manager for ""
	I1121 23:46:18.441904   15929 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 23:46:18.441913   15929 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1121 23:46:18.441968   15929 start.go:353] cluster config:
	{Name:addons-386094 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-386094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 23:46:18.443112   15929 out.go:179] * Starting "addons-386094" primary control-plane node in "addons-386094" cluster
	I1121 23:46:18.444026   15929 cache.go:134] Beginning downloading kic base image for docker with crio
	I1121 23:46:18.445018   15929 out.go:179] * Pulling base image v0.0.48-1763588073-21934 ...
	I1121 23:46:18.445992   15929 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 23:46:18.446026   15929 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21934-9122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1121 23:46:18.446038   15929 cache.go:65] Caching tarball of preloaded images
	I1121 23:46:18.446124   15929 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon
	I1121 23:46:18.446156   15929 preload.go:238] Found /home/jenkins/minikube-integration/21934-9122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1121 23:46:18.446168   15929 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1121 23:46:18.446551   15929 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/config.json ...
	I1121 23:46:18.446581   15929 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/config.json: {Name:mkb89b922b64e005a66f42b0754d650cb040a056 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:46:18.461600   15929 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e to local cache
	I1121 23:46:18.461699   15929 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local cache directory
	I1121 23:46:18.461714   15929 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local cache directory, skipping pull
	I1121 23:46:18.461718   15929 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e exists in cache, skipping pull
	I1121 23:46:18.461724   15929 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e as a tarball
	I1121 23:46:18.461731   15929 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e from local cache
	I1121 23:46:30.292793   15929 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e from cached tarball
	I1121 23:46:30.292842   15929 cache.go:243] Successfully downloaded all kic artifacts
	I1121 23:46:30.292873   15929 start.go:360] acquireMachinesLock for addons-386094: {Name:mk78cef021a6236ff8b6ca4fd56cc6d4acfe96b4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 23:46:30.292959   15929 start.go:364] duration metric: took 68.729µs to acquireMachinesLock for "addons-386094"
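The machines lock above is taken with Delay:500ms and Timeout:10m0s. minikube uses a cross-process mutex library for this; below is only a rough flock-based sketch with the same retry/timeout shape, and the lock-file path is hypothetical, not the one minikube derives:

	package main
	
	import (
		"fmt"
		"os"
		"time"
	
		"golang.org/x/sys/unix"
	)
	
	// acquireMachinesLock takes an exclusive, non-blocking advisory lock on a
	// lock file, polling every delay until timeout expires.
	func acquireMachinesLock(path string, delay, timeout time.Duration) (*os.File, error) {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o600)
		if err != nil {
			return nil, err
		}
		deadline := time.Now().Add(timeout)
		for {
			if err := unix.Flock(int(f.Fd()), unix.LOCK_EX|unix.LOCK_NB); err == nil {
				return f, nil // releasing the lock = closing f
			}
			if time.Now().After(deadline) {
				f.Close()
				return nil, fmt.Errorf("timed out waiting for %s", path)
			}
			time.Sleep(delay)
		}
	}
	
	func main() {
		start := time.Now()
		f, err := acquireMachinesLock("/tmp/minikube-machines.lock", // hypothetical path
			500*time.Millisecond, 10*time.Minute)
		if err != nil {
			panic(err)
		}
		defer f.Close()
		fmt.Printf("took %s to acquire machines lock\n", time.Since(start))
	}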
	I1121 23:46:30.292982   15929 start.go:93] Provisioning new machine with config: &{Name:addons-386094 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-386094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1121 23:46:30.293046   15929 start.go:125] createHost starting for "" (driver="docker")
	I1121 23:46:30.295251   15929 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1121 23:46:30.295486   15929 start.go:159] libmachine.API.Create for "addons-386094" (driver="docker")
	I1121 23:46:30.295516   15929 client.go:173] LocalClient.Create starting
	I1121 23:46:30.295605   15929 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem
	I1121 23:46:30.378245   15929 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem
	I1121 23:46:30.423361   15929 cli_runner.go:164] Run: docker network inspect addons-386094 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1121 23:46:30.439969   15929 cli_runner.go:211] docker network inspect addons-386094 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1121 23:46:30.440067   15929 network_create.go:284] running [docker network inspect addons-386094] to gather additional debugging logs...
	I1121 23:46:30.440092   15929 cli_runner.go:164] Run: docker network inspect addons-386094
	W1121 23:46:30.455228   15929 cli_runner.go:211] docker network inspect addons-386094 returned with exit code 1
	I1121 23:46:30.455251   15929 network_create.go:287] error running [docker network inspect addons-386094]: docker network inspect addons-386094: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-386094 not found
	I1121 23:46:30.455265   15929 network_create.go:289] output of [docker network inspect addons-386094]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-386094 not found
	
	** /stderr **
	I1121 23:46:30.455344   15929 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 23:46:30.471011   15929 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d6f4f0}
	I1121 23:46:30.471041   15929 network_create.go:124] attempt to create docker network addons-386094 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1121 23:46:30.471103   15929 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-386094 addons-386094
	I1121 23:46:30.515114   15929 network_create.go:108] docker network addons-386094 192.168.49.0/24 created
	I1121 23:46:30.515142   15929 kic.go:121] calculated static IP "192.168.49.2" for the "addons-386094" container
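The network is created with a pinned subnet and gateway so that the first client address, 192.168.49.2, can be assigned to the node container deterministically. A sketch of the same `docker network create` call from Go, with the flags copied from the log line above (not minikube's own code):

	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		// Flags mirror the `docker network create` invocation in the log.
		cmd := exec.Command("docker", "network", "create",
			"--driver=bridge",
			"--subnet=192.168.49.0/24",
			"--gateway=192.168.49.1",
			"-o", "--ip-masq",
			"-o", "--icc",
			"-o", "com.docker.network.driver.mtu=1500",
			"--label=created_by.minikube.sigs.k8s.io=true",
			"--label=name.minikube.sigs.k8s.io=addons-386094",
			"addons-386094")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out) // network ID on success
		if err != nil {
			panic(err)
		}
	}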
	I1121 23:46:30.515193   15929 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1121 23:46:30.529426   15929 cli_runner.go:164] Run: docker volume create addons-386094 --label name.minikube.sigs.k8s.io=addons-386094 --label created_by.minikube.sigs.k8s.io=true
	I1121 23:46:30.545238   15929 oci.go:103] Successfully created a docker volume addons-386094
	I1121 23:46:30.545300   15929 cli_runner.go:164] Run: docker run --rm --name addons-386094-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-386094 --entrypoint /usr/bin/test -v addons-386094:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -d /var/lib
	I1121 23:46:36.901445   15929 cli_runner.go:217] Completed: docker run --rm --name addons-386094-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-386094 --entrypoint /usr/bin/test -v addons-386094:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -d /var/lib: (6.356071954s)
	I1121 23:46:36.901480   15929 oci.go:107] Successfully prepared a docker volume addons-386094
	I1121 23:46:36.901549   15929 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 23:46:36.901561   15929 kic.go:194] Starting extracting preloaded images to volume ...
	I1121 23:46:36.901633   15929 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21934-9122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-386094:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -I lz4 -xf /preloaded.tar -C /extractDir
	I1121 23:46:41.132842   15929 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21934-9122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-386094:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -I lz4 -xf /preloaded.tar -C /extractDir: (4.231119506s)
	I1121 23:46:41.132879   15929 kic.go:203] duration metric: took 4.231314624s to extract preloaded images to volume ...
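Preload extraction runs a throwaway container from the kicbase image with the lz4 tarball bind-mounted read-only and the machine volume mounted at /extractDir, exactly as the `docker run --rm --entrypoint /usr/bin/tar` lines above show. A trimmed sketch of that invocation; the image tag and tarball path below are placeholders for the long references in the log:

	package main
	
	import (
		"fmt"
		"os/exec"
		"time"
	)
	
	// extractPreload unpacks the preload tarball onto the named volume by
	// running tar inside a disposable container, mirroring the log above.
	func extractPreload(image, tarball, volume string) error {
		cmd := exec.Command("docker", "run", "--rm",
			"--entrypoint", "/usr/bin/tar",
			"-v", tarball+":/preloaded.tar:ro",
			"-v", volume+":/extractDir",
			image,
			"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
		return cmd.Run()
	}
	
	func main() {
		start := time.Now()
		err := extractPreload(
			"gcr.io/k8s-minikube/kicbase-builds:<tag>", // placeholder tag
			"preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4",
			"addons-386094")
		fmt.Println(time.Since(start), err)
	}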
	W1121 23:46:41.132969   15929 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1121 23:46:41.133016   15929 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1121 23:46:41.133091   15929 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1121 23:46:41.192691   15929 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-386094 --name addons-386094 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-386094 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-386094 --network addons-386094 --ip 192.168.49.2 --volume addons-386094:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e
	I1121 23:46:41.490558   15929 cli_runner.go:164] Run: docker container inspect addons-386094 --format={{.State.Running}}
	I1121 23:46:41.508982   15929 cli_runner.go:164] Run: docker container inspect addons-386094 --format={{.State.Status}}
	I1121 23:46:41.526396   15929 cli_runner.go:164] Run: docker exec addons-386094 stat /var/lib/dpkg/alternatives/iptables
	I1121 23:46:41.577985   15929 oci.go:144] the created container "addons-386094" has a running status.
	I1121 23:46:41.578016   15929 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21934-9122/.minikube/machines/addons-386094/id_rsa...
	I1121 23:46:41.733285   15929 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21934-9122/.minikube/machines/addons-386094/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1121 23:46:41.758148   15929 cli_runner.go:164] Run: docker container inspect addons-386094 --format={{.State.Status}}
	I1121 23:46:41.779276   15929 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1121 23:46:41.779295   15929 kic_runner.go:114] Args: [docker exec --privileged addons-386094 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1121 23:46:41.827416   15929 cli_runner.go:164] Run: docker container inspect addons-386094 --format={{.State.Status}}
	I1121 23:46:41.846588   15929 machine.go:94] provisionDockerMachine start ...
	I1121 23:46:41.846672   15929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386094
	I1121 23:46:41.865609   15929 main.go:143] libmachine: Using SSH client type: native
	I1121 23:46:41.865967   15929 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1121 23:46:41.865989   15929 main.go:143] libmachine: About to run SSH command:
	hostname
	I1121 23:46:41.986351   15929 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-386094
	
	I1121 23:46:41.986383   15929 ubuntu.go:182] provisioning hostname "addons-386094"
	I1121 23:46:41.986445   15929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386094
	I1121 23:46:42.003676   15929 main.go:143] libmachine: Using SSH client type: native
	I1121 23:46:42.003913   15929 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1121 23:46:42.003936   15929 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-386094 && echo "addons-386094" | sudo tee /etc/hostname
	I1121 23:46:42.132374   15929 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-386094
	
	I1121 23:46:42.132468   15929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386094
	I1121 23:46:42.149711   15929 main.go:143] libmachine: Using SSH client type: native
	I1121 23:46:42.149981   15929 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1121 23:46:42.150007   15929 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-386094' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-386094/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-386094' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1121 23:46:42.266783   15929 main.go:143] libmachine: SSH cmd err, output: <nil>: 
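Provisioning commands such as the hostname fix-up above travel over SSH to the container's published 22/tcp port, 127.0.0.1:32768 in this run, authenticated with the generated machine key. A minimal sketch of that transport with golang.org/x/crypto/ssh (the key path is shortened; host-key checking is skipped only because the target is a throwaway test container):

	package main
	
	import (
		"fmt"
		"os"
	
		"golang.org/x/crypto/ssh"
	)
	
	// runOverSSH executes one command on the node over SSH, the way the
	// provisioning steps above are pushed to 127.0.0.1:32768.
	func runOverSSH(addr, keyPath, cmd string) (string, error) {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return "", err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return "", err
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test container only
		}
		client, err := ssh.Dial("tcp", addr, cfg)
		if err != nil {
			return "", err
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			return "", err
		}
		defer sess.Close()
		out, err := sess.CombinedOutput(cmd)
		return string(out), err
	}
	
	func main() {
		out, err := runOverSSH("127.0.0.1:32768",
			"/home/jenkins/.minikube/machines/addons-386094/id_rsa", // shortened path
			"hostname")
		fmt.Println(out, err)
	}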
	I1121 23:46:42.266815   15929 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21934-9122/.minikube CaCertPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21934-9122/.minikube}
	I1121 23:46:42.266837   15929 ubuntu.go:190] setting up certificates
	I1121 23:46:42.266848   15929 provision.go:84] configureAuth start
	I1121 23:46:42.266905   15929 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-386094
	I1121 23:46:42.283092   15929 provision.go:143] copyHostCerts
	I1121 23:46:42.283169   15929 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21934-9122/.minikube/ca.pem (1078 bytes)
	I1121 23:46:42.283345   15929 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21934-9122/.minikube/cert.pem (1123 bytes)
	I1121 23:46:42.283452   15929 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21934-9122/.minikube/key.pem (1675 bytes)
	I1121 23:46:42.283543   15929 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21934-9122/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca-key.pem org=jenkins.addons-386094 san=[127.0.0.1 192.168.49.2 addons-386094 localhost minikube]
	I1121 23:46:42.388960   15929 provision.go:177] copyRemoteCerts
	I1121 23:46:42.389025   15929 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1121 23:46:42.389086   15929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386094
	I1121 23:46:42.405124   15929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/addons-386094/id_rsa Username:docker}
	I1121 23:46:42.493139   15929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1121 23:46:42.510404   15929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1121 23:46:42.525680   15929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1121 23:46:42.540716   15929 provision.go:87] duration metric: took 273.859034ms to configureAuth
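configureAuth generates a server certificate whose SANs cover every name the API server may be reached by: 127.0.0.1, 192.168.49.2, addons-386094, localhost and minikube, per the san=[...] list above. A compact crypto/x509 sketch with those SANs; it self-signs to stay short, whereas minikube signs with the profile CA, and the 26280h lifetime mirrors CertExpiration in the cluster config:

	package main
	
	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)
	
	func main() {
		// Self-signed for brevity; minikube signs with its profile CA instead.
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}
		tmpl := x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.addons-386094"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs copied from the provision log above.
			DNSNames:    []string{"addons-386094", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
		}
		der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}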
	I1121 23:46:42.540737   15929 ubuntu.go:206] setting minikube options for container-runtime
	I1121 23:46:42.540875   15929 config.go:182] Loaded profile config "addons-386094": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:46:42.540964   15929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386094
	I1121 23:46:42.558045   15929 main.go:143] libmachine: Using SSH client type: native
	I1121 23:46:42.558310   15929 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1121 23:46:42.558327   15929 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1121 23:46:42.797870   15929 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1121 23:46:42.797894   15929 machine.go:97] duration metric: took 951.286104ms to provisionDockerMachine
	I1121 23:46:42.797908   15929 client.go:176] duration metric: took 12.502380531s to LocalClient.Create
	I1121 23:46:42.797922   15929 start.go:167] duration metric: took 12.502437401s to libmachine.API.Create "addons-386094"
	I1121 23:46:42.797929   15929 start.go:293] postStartSetup for "addons-386094" (driver="docker")
	I1121 23:46:42.797940   15929 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1121 23:46:42.797999   15929 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1121 23:46:42.798037   15929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386094
	I1121 23:46:42.814510   15929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/addons-386094/id_rsa Username:docker}
	I1121 23:46:42.903494   15929 ssh_runner.go:195] Run: cat /etc/os-release
	I1121 23:46:42.906654   15929 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1121 23:46:42.906684   15929 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1121 23:46:42.906697   15929 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-9122/.minikube/addons for local assets ...
	I1121 23:46:42.906753   15929 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-9122/.minikube/files for local assets ...
	I1121 23:46:42.906785   15929 start.go:296] duration metric: took 108.849723ms for postStartSetup
	I1121 23:46:42.907086   15929 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-386094
	I1121 23:46:42.923462   15929 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/config.json ...
	I1121 23:46:42.923694   15929 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 23:46:42.923732   15929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386094
	I1121 23:46:42.939466   15929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/addons-386094/id_rsa Username:docker}
	I1121 23:46:43.023230   15929 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1121 23:46:43.027662   15929 start.go:128] duration metric: took 12.734585583s to createHost
	I1121 23:46:43.027683   15929 start.go:83] releasing machines lock for "addons-386094", held for 12.734711993s
	I1121 23:46:43.027750   15929 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-386094
	I1121 23:46:43.043675   15929 ssh_runner.go:195] Run: cat /version.json
	I1121 23:46:43.043720   15929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386094
	I1121 23:46:43.043794   15929 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1121 23:46:43.043854   15929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386094
	I1121 23:46:43.060551   15929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/addons-386094/id_rsa Username:docker}
	I1121 23:46:43.060949   15929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/addons-386094/id_rsa Username:docker}
	I1121 23:46:43.198978   15929 ssh_runner.go:195] Run: systemctl --version
	I1121 23:46:43.204608   15929 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1121 23:46:43.234919   15929 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1121 23:46:43.238856   15929 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1121 23:46:43.238914   15929 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1121 23:46:43.261899   15929 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1121 23:46:43.261922   15929 start.go:496] detecting cgroup driver to use...
	I1121 23:46:43.261965   15929 detect.go:190] detected "systemd" cgroup driver on host os
	I1121 23:46:43.262008   15929 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1121 23:46:43.275337   15929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1121 23:46:43.285713   15929 docker.go:218] disabling cri-docker service (if available) ...
	I1121 23:46:43.285760   15929 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1121 23:46:43.299864   15929 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1121 23:46:43.315045   15929 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1121 23:46:43.386339   15929 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1121 23:46:43.469439   15929 docker.go:234] disabling docker service ...
	I1121 23:46:43.469500   15929 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1121 23:46:43.484915   15929 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1121 23:46:43.495673   15929 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1121 23:46:43.575976   15929 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1121 23:46:43.650631   15929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1121 23:46:43.661352   15929 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1121 23:46:43.673818   15929 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1121 23:46:43.673870   15929 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 23:46:43.682806   15929 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1121 23:46:43.682845   15929 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 23:46:43.690423   15929 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 23:46:43.697825   15929 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 23:46:43.705363   15929 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1121 23:46:43.712719   15929 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 23:46:43.720133   15929 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 23:46:43.731825   15929 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 23:46:43.739337   15929 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1121 23:46:43.745760   15929 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1121 23:46:43.745806   15929 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1121 23:46:43.756159   15929 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
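Each `sudo sed -i` above is a one-line substitution in /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, sysctls). The same kind of edit expressed in Go, as a sketch of the pattern rather than minikube's implementation:

	package main
	
	import (
		"os"
		"regexp"
	)
	
	// setConfLine replaces any existing `key = ...` line with a quoted value,
	// the same effect as the sed one-liners in the log above.
	func setConfLine(path, key, value string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
		out := re.ReplaceAll(data, []byte(key+` = "`+value+`"`))
		return os.WriteFile(path, out, 0o644)
	}
	
	func main() {
		const conf = "/etc/crio/crio.conf.d/02-crio.conf"
		if err := setConfLine(conf, "pause_image", "registry.k8s.io/pause:3.10.1"); err != nil {
			panic(err)
		}
		if err := setConfLine(conf, "cgroup_manager", "systemd"); err != nil {
			panic(err)
		}
	}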
	I1121 23:46:43.763083   15929 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 23:46:43.835464   15929 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1121 23:46:43.960021   15929 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1121 23:46:43.960116   15929 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1121 23:46:43.963715   15929 start.go:564] Will wait 60s for crictl version
	I1121 23:46:43.963774   15929 ssh_runner.go:195] Run: which crictl
	I1121 23:46:43.966834   15929 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1121 23:46:43.989202   15929 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1121 23:46:43.989285   15929 ssh_runner.go:195] Run: crio --version
	I1121 23:46:44.014427   15929 ssh_runner.go:195] Run: crio --version
	I1121 23:46:44.041204   15929 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1121 23:46:44.042335   15929 cli_runner.go:164] Run: docker network inspect addons-386094 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 23:46:44.058518   15929 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1121 23:46:44.062074   15929 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 23:46:44.071550   15929 kubeadm.go:884] updating cluster {Name:addons-386094 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-386094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1121 23:46:44.071661   15929 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 23:46:44.071697   15929 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 23:46:44.100242   15929 crio.go:514] all images are preloaded for cri-o runtime.
	I1121 23:46:44.100257   15929 crio.go:433] Images already preloaded, skipping extraction
	I1121 23:46:44.100291   15929 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 23:46:44.123156   15929 crio.go:514] all images are preloaded for cri-o runtime.
	I1121 23:46:44.123172   15929 cache_images.go:86] Images are preloaded, skipping loading
	I1121 23:46:44.123179   15929 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1121 23:46:44.123261   15929 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-386094 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-386094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1121 23:46:44.123316   15929 ssh_runner.go:195] Run: crio config
	I1121 23:46:44.162965   15929 cni.go:84] Creating CNI manager for ""
	I1121 23:46:44.162985   15929 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 23:46:44.163003   15929 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1121 23:46:44.163025   15929 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-386094 NodeName:addons-386094 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1121 23:46:44.163178   15929 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-386094"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1121 23:46:44.163230   15929 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1121 23:46:44.170260   15929 binaries.go:51] Found k8s binaries, skipping transfer
	I1121 23:46:44.170305   15929 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1121 23:46:44.177200   15929 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1121 23:46:44.188504   15929 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1121 23:46:44.201639   15929 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
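At this point the rendered manifest is only staged as kubeadm.yaml.new; it is copied over /var/tmp/minikube/kubeadm.yaml just before init runs. If a config like this needs checking by hand, recent kubeadm releases can lint it directly (a sketch; the validate subcommand was added around kubeadm v1.26, so availability depends on the binaries in use):

    # sanity-check the staged config before kubeadm init consumes it
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new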
	I1121 23:46:44.212529   15929 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1121 23:46:44.215550   15929 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 23:46:44.224305   15929 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 23:46:44.297273   15929 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 23:46:44.319790   15929 certs.go:69] Setting up /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094 for IP: 192.168.49.2
	I1121 23:46:44.319811   15929 certs.go:195] generating shared ca certs ...
	I1121 23:46:44.319827   15929 certs.go:227] acquiring lock for ca certs: {Name:mk04e97b22fe69e444baefaaf0d53a0afe979470 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:46:44.319944   15929 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21934-9122/.minikube/ca.key
	I1121 23:46:44.348846   15929 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-9122/.minikube/ca.crt ...
	I1121 23:46:44.348867   15929 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/ca.crt: {Name:mkea849deea592b6bfe00d3ded9d602ecb5c2ed1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:46:44.349001   15929 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-9122/.minikube/ca.key ...
	I1121 23:46:44.349012   15929 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/ca.key: {Name:mke9cb529f46b649a6be1ccb61fe02278e3a93d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:46:44.349094   15929 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21934-9122/.minikube/proxy-client-ca.key
	I1121 23:46:44.446982   15929 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-9122/.minikube/proxy-client-ca.crt ...
	I1121 23:46:44.447008   15929 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/proxy-client-ca.crt: {Name:mk10973c1cd72755f24858e36c099a37cb8141d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:46:44.447165   15929 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-9122/.minikube/proxy-client-ca.key ...
	I1121 23:46:44.447177   15929 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/proxy-client-ca.key: {Name:mk06eca966f5e85631f257972802afd34e2b6c55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:46:44.447245   15929 certs.go:257] generating profile certs ...
	I1121 23:46:44.447323   15929 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/client.key
	I1121 23:46:44.447343   15929 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/client.crt with IP's: []
	I1121 23:46:44.500003   15929 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/client.crt ...
	I1121 23:46:44.500025   15929 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/client.crt: {Name:mk4db81f59767802317cd84b2c8d5697fd53f0af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:46:44.500183   15929 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/client.key ...
	I1121 23:46:44.500196   15929 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/client.key: {Name:mk2ef87ba4917233426635eff4f07a22bcc4a4bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:46:44.500268   15929 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/apiserver.key.d0250c28
	I1121 23:46:44.500287   15929 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/apiserver.crt.d0250c28 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1121 23:46:44.643185   15929 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/apiserver.crt.d0250c28 ...
	I1121 23:46:44.643210   15929 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/apiserver.crt.d0250c28: {Name:mkc2d45a6f019b81d14713bb8042d30c2d6c11cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:46:44.643359   15929 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/apiserver.key.d0250c28 ...
	I1121 23:46:44.643373   15929 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/apiserver.key.d0250c28: {Name:mk03bbf321a6e9ef5aae40d73128d4835b02eebe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:46:44.643446   15929 certs.go:382] copying /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/apiserver.crt.d0250c28 -> /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/apiserver.crt
	I1121 23:46:44.643544   15929 certs.go:386] copying /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/apiserver.key.d0250c28 -> /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/apiserver.key
	I1121 23:46:44.643601   15929 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/proxy-client.key
	I1121 23:46:44.643620   15929 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/proxy-client.crt with IP's: []
	I1121 23:46:44.760579   15929 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/proxy-client.crt ...
	I1121 23:46:44.760602   15929 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/proxy-client.crt: {Name:mk106507d6c193afb25675870f56a1094f6b6311 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:46:44.760746   15929 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/proxy-client.key ...
	I1121 23:46:44.760758   15929 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/proxy-client.key: {Name:mkac8987c15993b47810e86bf565985768b08430 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:46:44.760920   15929 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca-key.pem (1679 bytes)
	I1121 23:46:44.760956   15929 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem (1078 bytes)
	I1121 23:46:44.760981   15929 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem (1123 bytes)
	I1121 23:46:44.761004   15929 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/key.pem (1675 bytes)
	I1121 23:46:44.761643   15929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1121 23:46:44.778368   15929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1121 23:46:44.793741   15929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1121 23:46:44.808981   15929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1121 23:46:44.824472   15929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1121 23:46:44.839774   15929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1121 23:46:44.855082   15929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1121 23:46:44.870233   15929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1121 23:46:44.885239   15929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1121 23:46:44.902155   15929 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1121 23:46:44.912847   15929 ssh_runner.go:195] Run: openssl version
	I1121 23:46:44.918206   15929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1121 23:46:44.927400   15929 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1121 23:46:44.930504   15929 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 23:46 /usr/share/ca-certificates/minikubeCA.pem
	I1121 23:46:44.930559   15929 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1121 23:46:44.962792   15929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
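The two commands above implement OpenSSL's hashed-directory convention: TLS tooling that scans /etc/ssl/certs looks a CA up by the hash of its subject name, so a symlink named <hash>.0 must point at the PEM (b5213941 is the hash printed for minikubeCA.pem in this run). The same steps by hand:

    # compute the subject-name hash and link the CA into the hashed directory
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo /bin/bash -c "test -L /etc/ssl/certs/${hash}.0 || \
      ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/${hash}.0"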
	I1121 23:46:44.969807   15929 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1121 23:46:44.972737   15929 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1121 23:46:44.972775   15929 kubeadm.go:401] StartCluster: {Name:addons-386094 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-386094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 23:46:44.972842   15929 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 23:46:44.972875   15929 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 23:46:44.997233   15929 cri.go:89] found id: ""
	I1121 23:46:44.997291   15929 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1121 23:46:45.004184   15929 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1121 23:46:45.011224   15929 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1121 23:46:45.011260   15929 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1121 23:46:45.017870   15929 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1121 23:46:45.017887   15929 kubeadm.go:158] found existing configuration files:
	
	I1121 23:46:45.017920   15929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1121 23:46:45.024537   15929 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1121 23:46:45.024575   15929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1121 23:46:45.030826   15929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1121 23:46:45.037401   15929 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1121 23:46:45.037451   15929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1121 23:46:45.043693   15929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1121 23:46:45.050213   15929 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1121 23:46:45.050252   15929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1121 23:46:45.056468   15929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1121 23:46:45.062900   15929 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1121 23:46:45.062947   15929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
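The four grep/rm pairs above are a stale-kubeconfig sweep: each file under /etc/kubernetes survives only if it already points at the expected control-plane endpoint; otherwise it is deleted so kubeadm init can regenerate it (on this first start the grep exit status 2 just means the files do not exist yet). The pattern, condensed:

    # keep a kubeconfig only if it targets the expected API endpoint
    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done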
	I1121 23:46:45.069230   15929 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1121 23:46:45.102343   15929 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1121 23:46:45.102409   15929 kubeadm.go:319] [preflight] Running pre-flight checks
	I1121 23:46:45.132265   15929 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1121 23:46:45.132353   15929 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1121 23:46:45.132405   15929 kubeadm.go:319] OS: Linux
	I1121 23:46:45.132466   15929 kubeadm.go:319] CGROUPS_CPU: enabled
	I1121 23:46:45.132544   15929 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1121 23:46:45.132607   15929 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1121 23:46:45.132699   15929 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1121 23:46:45.132780   15929 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1121 23:46:45.132849   15929 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1121 23:46:45.132952   15929 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1121 23:46:45.133021   15929 kubeadm.go:319] CGROUPS_IO: enabled
	I1121 23:46:45.182928   15929 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1121 23:46:45.183069   15929 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1121 23:46:45.183205   15929 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1121 23:46:45.190330   15929 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1121 23:46:45.192225   15929 out.go:252]   - Generating certificates and keys ...
	I1121 23:46:45.192296   15929 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1121 23:46:45.192375   15929 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1121 23:46:45.295275   15929 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1121 23:46:45.370011   15929 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1121 23:46:45.508369   15929 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1121 23:46:45.652707   15929 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1121 23:46:46.166082   15929 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1121 23:46:46.166251   15929 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-386094 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1121 23:46:46.599889   15929 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1121 23:46:46.600023   15929 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-386094 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1121 23:46:46.726393   15929 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1121 23:46:46.902957   15929 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1121 23:46:46.994307   15929 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1121 23:46:46.994404   15929 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1121 23:46:47.032304   15929 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1121 23:46:47.250238   15929 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1121 23:46:47.381814   15929 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1121 23:46:47.593080   15929 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1121 23:46:48.059588   15929 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1121 23:46:48.060027   15929 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1121 23:46:48.063525   15929 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1121 23:46:48.064795   15929 out.go:252]   - Booting up control plane ...
	I1121 23:46:48.064921   15929 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1121 23:46:48.065021   15929 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1121 23:46:48.066549   15929 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1121 23:46:48.094696   15929 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1121 23:46:48.094834   15929 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1121 23:46:48.100583   15929 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1121 23:46:48.100863   15929 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1121 23:46:48.100922   15929 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1121 23:46:48.191838   15929 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1121 23:46:48.191973   15929 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1121 23:46:49.194089   15929 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001820793s
	I1121 23:46:49.197816   15929 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1121 23:46:49.197938   15929 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1121 23:46:49.198080   15929 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1121 23:46:49.198213   15929 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1121 23:46:50.442836   15929 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.244587389s
	I1121 23:46:50.444810   15929 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.247001708s
	I1121 23:46:52.199819   15929 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.001791434s
	I1121 23:46:52.210782   15929 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1121 23:46:52.220431   15929 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1121 23:46:52.227371   15929 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1121 23:46:52.227597   15929 kubeadm.go:319] [mark-control-plane] Marking the node addons-386094 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1121 23:46:52.234188   15929 kubeadm.go:319] [bootstrap-token] Using token: 97huse.e9m5pfe7tq8jbjm1
	I1121 23:46:52.235480   15929 out.go:252]   - Configuring RBAC rules ...
	I1121 23:46:52.235631   15929 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1121 23:46:52.237950   15929 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1121 23:46:52.242323   15929 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1121 23:46:52.244538   15929 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1121 23:46:52.246498   15929 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1121 23:46:52.249179   15929 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1121 23:46:52.605362   15929 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1121 23:46:53.017982   15929 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1121 23:46:53.604874   15929 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1121 23:46:53.605773   15929 kubeadm.go:319] 
	I1121 23:46:53.605866   15929 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1121 23:46:53.605876   15929 kubeadm.go:319] 
	I1121 23:46:53.605970   15929 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1121 23:46:53.605985   15929 kubeadm.go:319] 
	I1121 23:46:53.606021   15929 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1121 23:46:53.606123   15929 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1121 23:46:53.606224   15929 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1121 23:46:53.606241   15929 kubeadm.go:319] 
	I1121 23:46:53.606304   15929 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1121 23:46:53.606313   15929 kubeadm.go:319] 
	I1121 23:46:53.606387   15929 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1121 23:46:53.606400   15929 kubeadm.go:319] 
	I1121 23:46:53.606484   15929 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1121 23:46:53.606588   15929 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1121 23:46:53.606666   15929 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1121 23:46:53.606679   15929 kubeadm.go:319] 
	I1121 23:46:53.606797   15929 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1121 23:46:53.606903   15929 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1121 23:46:53.606913   15929 kubeadm.go:319] 
	I1121 23:46:53.607014   15929 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 97huse.e9m5pfe7tq8jbjm1 \
	I1121 23:46:53.607146   15929 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b7f0723026bed3ab68bd6fad96097e10a75fbae3d2b8c0df51e0d691a79889b0 \
	I1121 23:46:53.607178   15929 kubeadm.go:319] 	--control-plane 
	I1121 23:46:53.607187   15929 kubeadm.go:319] 
	I1121 23:46:53.607281   15929 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1121 23:46:53.607289   15929 kubeadm.go:319] 
	I1121 23:46:53.607391   15929 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 97huse.e9m5pfe7tq8jbjm1 \
	I1121 23:46:53.607514   15929 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b7f0723026bed3ab68bd6fad96097e10a75fbae3d2b8c0df51e0d691a79889b0 
	I1121 23:46:53.609346   15929 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1121 23:46:53.609463   15929 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
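The join commands printed above embed the bootstrap token 97huse.e9m5pfe7tq8jbjm1, which expires after the 24h ttl set in the InitConfiguration. Once it lapses, a fresh worker join command can be minted on the control plane with standard kubeadm tooling (not minikube-specific):

    # create a new bootstrap token and print the matching join command
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm token create --print-join-command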
	I1121 23:46:53.609493   15929 cni.go:84] Creating CNI manager for ""
	I1121 23:46:53.609503   15929 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 23:46:53.610941   15929 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1121 23:46:53.611994   15929 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1121 23:46:53.615945   15929 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1121 23:46:53.615960   15929 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1121 23:46:53.628509   15929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1121 23:46:53.815999   15929 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1121 23:46:53.816092   15929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 23:46:53.816139   15929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-386094 minikube.k8s.io/updated_at=2025_11_21T23_46_53_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785 minikube.k8s.io/name=addons-386094 minikube.k8s.io/primary=true
	I1121 23:46:53.883995   15929 ops.go:34] apiserver oom_adj: -16
	I1121 23:46:53.884013   15929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 23:46:54.384046   15929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 23:46:54.884158   15929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 23:46:55.384557   15929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 23:46:55.884239   15929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 23:46:56.384703   15929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 23:46:56.884350   15929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 23:46:57.384087   15929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 23:46:57.884305   15929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 23:46:58.384143   15929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 23:46:58.443296   15929 kubeadm.go:1114] duration metric: took 4.627258247s to wait for elevateKubeSystemPrivileges
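The burst of `kubectl get sa default` calls before this line is a readiness poll: minikube retries on a roughly 500ms cadence until the default service account exists, which gates the elevateKubeSystemPrivileges step (the minikube-rbac cluster-admin binding created alongside it). A standalone version of the same wait:

    # block until the controller has created the default service account
    until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done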
	I1121 23:46:58.443346   15929 kubeadm.go:403] duration metric: took 13.47056512s to StartCluster
	I1121 23:46:58.443370   15929 settings.go:142] acquiring lock: {Name:mk281bec5fc7c41e6f3fe8d6a1502b13a2db8fe5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:46:58.443484   15929 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21934-9122/kubeconfig
	I1121 23:46:58.443880   15929 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/kubeconfig: {Name:mk85f0b9d89ff824568ecbebf0d1111042a6bb93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:46:58.444072   15929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1121 23:46:58.444101   15929 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1121 23:46:58.444172   15929 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1121 23:46:58.444298   15929 config.go:182] Loaded profile config "addons-386094": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:46:58.444314   15929 addons.go:70] Setting cloud-spanner=true in profile "addons-386094"
	I1121 23:46:58.444333   15929 addons.go:239] Setting addon cloud-spanner=true in "addons-386094"
	I1121 23:46:58.444343   15929 addons.go:70] Setting registry=true in profile "addons-386094"
	I1121 23:46:58.444332   15929 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-386094"
	I1121 23:46:58.444353   15929 addons.go:70] Setting volcano=true in profile "addons-386094"
	I1121 23:46:58.444361   15929 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-386094"
	I1121 23:46:58.444363   15929 addons.go:239] Setting addon volcano=true in "addons-386094"
	I1121 23:46:58.444369   15929 addons.go:70] Setting volumesnapshots=true in profile "addons-386094"
	I1121 23:46:58.444371   15929 host.go:66] Checking if "addons-386094" exists ...
	I1121 23:46:58.444379   15929 addons.go:239] Setting addon volumesnapshots=true in "addons-386094"
	I1121 23:46:58.444382   15929 host.go:66] Checking if "addons-386094" exists ...
	I1121 23:46:58.444371   15929 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-386094"
	I1121 23:46:58.444410   15929 host.go:66] Checking if "addons-386094" exists ...
	I1121 23:46:58.444411   15929 addons.go:70] Setting inspektor-gadget=true in profile "addons-386094"
	I1121 23:46:58.444443   15929 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-386094"
	I1121 23:46:58.444474   15929 addons.go:239] Setting addon inspektor-gadget=true in "addons-386094"
	I1121 23:46:58.444485   15929 addons.go:70] Setting default-storageclass=true in profile "addons-386094"
	I1121 23:46:58.444493   15929 host.go:66] Checking if "addons-386094" exists ...
	I1121 23:46:58.444307   15929 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-386094"
	I1121 23:46:58.444509   15929 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-386094"
	I1121 23:46:58.444514   15929 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-386094"
	I1121 23:46:58.444526   15929 host.go:66] Checking if "addons-386094" exists ...
	I1121 23:46:58.444795   15929 cli_runner.go:164] Run: docker container inspect addons-386094 --format={{.State.Status}}
	I1121 23:46:58.444792   15929 addons.go:70] Setting metrics-server=true in profile "addons-386094"
	I1121 23:46:58.444802   15929 addons.go:70] Setting registry-creds=true in profile "addons-386094"
	I1121 23:46:58.444815   15929 addons.go:239] Setting addon registry-creds=true in "addons-386094"
	I1121 23:46:58.444823   15929 addons.go:239] Setting addon metrics-server=true in "addons-386094"
	I1121 23:46:58.444845   15929 host.go:66] Checking if "addons-386094" exists ...
	I1121 23:46:58.444856   15929 host.go:66] Checking if "addons-386094" exists ...
	I1121 23:46:58.444938   15929 cli_runner.go:164] Run: docker container inspect addons-386094 --format={{.State.Status}}
	I1121 23:46:58.444941   15929 cli_runner.go:164] Run: docker container inspect addons-386094 --format={{.State.Status}}
	I1121 23:46:58.444945   15929 addons.go:70] Setting ingress-dns=true in profile "addons-386094"
	I1121 23:46:58.444954   15929 cli_runner.go:164] Run: docker container inspect addons-386094 --format={{.State.Status}}
	I1121 23:46:58.444959   15929 addons.go:239] Setting addon ingress-dns=true in "addons-386094"
	I1121 23:46:58.444985   15929 host.go:66] Checking if "addons-386094" exists ...
	I1121 23:46:58.445251   15929 cli_runner.go:164] Run: docker container inspect addons-386094 --format={{.State.Status}}
	I1121 23:46:58.445464   15929 cli_runner.go:164] Run: docker container inspect addons-386094 --format={{.State.Status}}
	I1121 23:46:58.444795   15929 cli_runner.go:164] Run: docker container inspect addons-386094 --format={{.State.Status}}
	I1121 23:46:58.446274   15929 addons.go:70] Setting gcp-auth=true in profile "addons-386094"
	I1121 23:46:58.446432   15929 mustload.go:66] Loading cluster: addons-386094
	I1121 23:46:58.444404   15929 host.go:66] Checking if "addons-386094" exists ...
	I1121 23:46:58.444299   15929 addons.go:70] Setting ingress=true in profile "addons-386094"
	I1121 23:46:58.444938   15929 cli_runner.go:164] Run: docker container inspect addons-386094 --format={{.State.Status}}
	I1121 23:46:58.444336   15929 addons.go:70] Setting yakd=true in profile "addons-386094"
	I1121 23:46:58.446285   15929 addons.go:70] Setting storage-provisioner=true in profile "addons-386094"
	I1121 23:46:58.446460   15929 addons.go:239] Setting addon storage-provisioner=true in "addons-386094"
	I1121 23:46:58.446485   15929 host.go:66] Checking if "addons-386094" exists ...
	I1121 23:46:58.445472   15929 cli_runner.go:164] Run: docker container inspect addons-386094 --format={{.State.Status}}
	I1121 23:46:58.446684   15929 cli_runner.go:164] Run: docker container inspect addons-386094 --format={{.State.Status}}
	I1121 23:46:58.446932   15929 addons.go:239] Setting addon ingress=true in "addons-386094"
	I1121 23:46:58.447323   15929 host.go:66] Checking if "addons-386094" exists ...
	I1121 23:46:58.444361   15929 addons.go:239] Setting addon registry=true in "addons-386094"
	I1121 23:46:58.447451   15929 host.go:66] Checking if "addons-386094" exists ...
	I1121 23:46:58.446946   15929 cli_runner.go:164] Run: docker container inspect addons-386094 --format={{.State.Status}}
	I1121 23:46:58.446355   15929 out.go:179] * Verifying Kubernetes components...
	I1121 23:46:58.447103   15929 addons.go:239] Setting addon yakd=true in "addons-386094"
	I1121 23:46:58.448074   15929 host.go:66] Checking if "addons-386094" exists ...
	I1121 23:46:58.446401   15929 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-386094"
	I1121 23:46:58.448439   15929 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-386094"
	I1121 23:46:58.448465   15929 host.go:66] Checking if "addons-386094" exists ...
	I1121 23:46:58.448933   15929 cli_runner.go:164] Run: docker container inspect addons-386094 --format={{.State.Status}}
	I1121 23:46:58.451309   15929 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 23:46:58.451952   15929 config.go:182] Loaded profile config "addons-386094": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:46:58.452283   15929 cli_runner.go:164] Run: docker container inspect addons-386094 --format={{.State.Status}}
	I1121 23:46:58.453942   15929 cli_runner.go:164] Run: docker container inspect addons-386094 --format={{.State.Status}}
	I1121 23:46:58.452612   15929 cli_runner.go:164] Run: docker container inspect addons-386094 --format={{.State.Status}}
	I1121 23:46:58.456430   15929 cli_runner.go:164] Run: docker container inspect addons-386094 --format={{.State.Status}}
	I1121 23:46:58.457063   15929 cli_runner.go:164] Run: docker container inspect addons-386094 --format={{.State.Status}}
	W1121 23:46:58.502004   15929 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1121 23:46:58.523888   15929 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1121 23:46:58.525013   15929 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1121 23:46:58.525108   15929 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1121 23:46:58.525529   15929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1121 23:46:58.525597   15929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386094
	I1121 23:46:58.525202   15929 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1121 23:46:58.526887   15929 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1121 23:46:58.528094   15929 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1121 23:46:58.528517   15929 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1121 23:46:58.528808   15929 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1121 23:46:58.528987   15929 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1121 23:46:58.529448   15929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386094
	I1121 23:46:58.531883   15929 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1121 23:46:58.532235   15929 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1121 23:46:58.534585   15929 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1121 23:46:58.534602   15929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1121 23:46:58.534630   15929 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1121 23:46:58.536098   15929 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1121 23:46:58.536181   15929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386094
	I1121 23:46:58.536780   15929 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-386094"
	I1121 23:46:58.536829   15929 host.go:66] Checking if "addons-386094" exists ...
	I1121 23:46:58.537289   15929 cli_runner.go:164] Run: docker container inspect addons-386094 --format={{.State.Status}}
	I1121 23:46:58.537549   15929 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1121 23:46:58.538212   15929 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1121 23:46:58.538840   15929 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1121 23:46:58.538906   15929 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1121 23:46:58.539158   15929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1121 23:46:58.539209   15929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386094
	I1121 23:46:58.539505   15929 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1121 23:46:58.539517   15929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1121 23:46:58.539559   15929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386094
	I1121 23:46:58.543263   15929 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.43
	I1121 23:46:58.543364   15929 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1121 23:46:58.544447   15929 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1121 23:46:58.544463   15929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1121 23:46:58.544508   15929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386094
	I1121 23:46:58.546095   15929 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1121 23:46:58.546962   15929 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1121 23:46:58.546979   15929 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1121 23:46:58.547033   15929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386094
	I1121 23:46:58.561409   15929 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1121 23:46:58.561502   15929 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1121 23:46:58.563155   15929 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1121 23:46:58.563178   15929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1121 23:46:58.563242   15929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386094
	I1121 23:46:58.566681   15929 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1121 23:46:58.566761   15929 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1121 23:46:58.568252   15929 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1121 23:46:58.568275   15929 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1121 23:46:58.568354   15929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386094
	I1121 23:46:58.568908   15929 addons.go:239] Setting addon default-storageclass=true in "addons-386094"
	I1121 23:46:58.570673   15929 host.go:66] Checking if "addons-386094" exists ...
	I1121 23:46:58.571183   15929 cli_runner.go:164] Run: docker container inspect addons-386094 --format={{.State.Status}}
	I1121 23:46:58.570083   15929 out.go:179]   - Using image docker.io/registry:3.0.0
	I1121 23:46:58.570160   15929 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1121 23:46:58.571608   15929 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1121 23:46:58.571678   15929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386094
	I1121 23:46:58.572604   15929 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1121 23:46:58.572753   15929 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1121 23:46:58.572765   15929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1121 23:46:58.572823   15929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386094
	I1121 23:46:58.581648   15929 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 23:46:58.581765   15929 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1121 23:46:58.581776   15929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1121 23:46:58.581836   15929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386094
	I1121 23:46:58.583984   15929 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 23:46:58.584004   15929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1121 23:46:58.584068   15929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386094
	I1121 23:46:58.587136   15929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/addons-386094/id_rsa Username:docker}
	I1121 23:46:58.587159   15929 host.go:66] Checking if "addons-386094" exists ...
	I1121 23:46:58.598449   15929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/addons-386094/id_rsa Username:docker}
	I1121 23:46:58.599323   15929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/addons-386094/id_rsa Username:docker}
	I1121 23:46:58.602964   15929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/addons-386094/id_rsa Username:docker}
	I1121 23:46:58.621153   15929 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1121 23:46:58.622064   15929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/addons-386094/id_rsa Username:docker}
	I1121 23:46:58.622121   15929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/addons-386094/id_rsa Username:docker}
	I1121 23:46:58.626715   15929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/addons-386094/id_rsa Username:docker}
	I1121 23:46:58.627492   15929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/addons-386094/id_rsa Username:docker}
	I1121 23:46:58.629102   15929 out.go:179]   - Using image docker.io/busybox:stable
	I1121 23:46:58.630229   15929 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1121 23:46:58.630392   15929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1121 23:46:58.630570   15929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386094
	I1121 23:46:58.642358   15929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
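The pipeline above patches CoreDNS in place: it dumps the coredns ConfigMap, uses sed to splice a hosts block ahead of the forward plugin and a log directive ahead of errors, then feeds the result to kubectl replace. The fragment the sed expressions insert into the Corefile resolves host.minikube.internal to the gateway IP from inside the cluster:

        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }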
	I1121 23:46:58.643869   15929 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1121 23:46:58.643901   15929 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1121 23:46:58.644041   15929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/addons-386094/id_rsa Username:docker}
	I1121 23:46:58.644073   15929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386094
	I1121 23:46:58.653096   15929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/addons-386094/id_rsa Username:docker}
	I1121 23:46:58.654086   15929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/addons-386094/id_rsa Username:docker}
	I1121 23:46:58.654156   15929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/addons-386094/id_rsa Username:docker}
	I1121 23:46:58.655207   15929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/addons-386094/id_rsa Username:docker}
	W1121 23:46:58.660914   15929 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1121 23:46:58.660964   15929 retry.go:31] will retry after 256.482979ms: ssh: handshake failed: EOF
	W1121 23:46:58.662139   15929 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1121 23:46:58.662192   15929 retry.go:31] will retry after 191.458542ms: ssh: handshake failed: EOF
	I1121 23:46:58.671531   15929 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 23:46:58.671778   15929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/addons-386094/id_rsa Username:docker}
	I1121 23:46:58.689953   15929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/addons-386094/id_rsa Username:docker}
	I1121 23:46:58.762809   15929 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1121 23:46:58.762839   15929 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1121 23:46:58.765257   15929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1121 23:46:58.773684   15929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1121 23:46:58.774413   15929 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1121 23:46:58.774432   15929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1121 23:46:58.783583   15929 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1121 23:46:58.783611   15929 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1121 23:46:58.794330   15929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1121 23:46:58.797122   15929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1121 23:46:58.800341   15929 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1121 23:46:58.800361   15929 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1121 23:46:58.806079   15929 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1121 23:46:58.806099   15929 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1121 23:46:58.806193   15929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1121 23:46:58.817313   15929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1121 23:46:58.821002   15929 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1121 23:46:58.821025   15929 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1121 23:46:58.823548   15929 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1121 23:46:58.823571   15929 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1121 23:46:58.830679   15929 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1121 23:46:58.830698   15929 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1121 23:46:58.831863   15929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1121 23:46:58.845846   15929 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1121 23:46:58.845869   15929 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1121 23:46:58.846019   15929 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1121 23:46:58.846028   15929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1121 23:46:58.857371   15929 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1121 23:46:58.857391   15929 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1121 23:46:58.861419   15929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1121 23:46:58.865549   15929 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1121 23:46:58.865582   15929 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1121 23:46:58.874385   15929 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1121 23:46:58.874405   15929 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1121 23:46:58.883886   15929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1121 23:46:58.893503   15929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1121 23:46:58.904641   15929 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1121 23:46:58.904676   15929 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1121 23:46:58.922102   15929 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1121 23:46:58.922150   15929 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1121 23:46:58.924602   15929 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1121 23:46:58.924625   15929 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1121 23:46:58.945444   15929 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1121 23:46:58.945476   15929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1121 23:46:58.960280   15929 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1121 23:46:58.960323   15929 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1121 23:46:58.975207   15929 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1121 23:46:58.975232   15929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1121 23:46:59.010143   15929 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1121 23:46:59.010191   15929 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1121 23:46:59.020912   15929 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1121 23:46:59.021842   15929 node_ready.go:35] waiting up to 6m0s for node "addons-386094" to be "Ready" ...
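	[note] node_ready.go gives the node up to six minutes to report Ready, polling through minikube's own client code. A roughly equivalent manual check (illustrative only, not what minikube runs) would be:

	    kubectl --context addons-386094 wait node/addons-386094 \
	      --for=condition=Ready --timeout=6m

	The W-level "will retry" lines that appear below are this loop observing "Ready":"False" and sleeping between polls.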
	I1121 23:46:59.022331   15929 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1121 23:46:59.022346   15929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1121 23:46:59.032523   15929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1121 23:46:59.054529   15929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 23:46:59.054842   15929 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1121 23:46:59.054859   15929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1121 23:46:59.101439   15929 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1121 23:46:59.101466   15929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1121 23:46:59.101632   15929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1121 23:46:59.111496   15929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1121 23:46:59.167602   15929 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1121 23:46:59.167627   15929 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1121 23:46:59.244662   15929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1121 23:46:59.530246   15929 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-386094" context rescaled to 1 replicas
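	[note] The rescale trims the coredns deployment (kubeadm creates it with two replicas by default) down to one for a single-node cluster; the manual equivalent would be:

	    kubectl -n kube-system scale deployment coredns --replicas=1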
	I1121 23:46:59.775150   15929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (1.009853516s)
	I1121 23:46:59.775233   15929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.001522528s)
	I1121 23:46:59.954152   15929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.147915951s)
	I1121 23:46:59.954188   15929 addons.go:495] Verifying addon ingress=true in "addons-386094"
	I1121 23:46:59.954186   15929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.136843097s)
	I1121 23:46:59.954272   15929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.122381541s)
	I1121 23:46:59.954302   15929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.092865741s)
	I1121 23:46:59.954498   15929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.060959987s)
	I1121 23:46:59.954519   15929 addons.go:495] Verifying addon metrics-server=true in "addons-386094"
	I1121 23:46:59.954357   15929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.070441742s)
	I1121 23:46:59.954554   15929 addons.go:495] Verifying addon registry=true in "addons-386094"
	I1121 23:46:59.955788   15929 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-386094 service yakd-dashboard -n yakd-dashboard
	
	I1121 23:46:59.955798   15929 out.go:179] * Verifying ingress addon...
	I1121 23:46:59.955826   15929 out.go:179] * Verifying registry addon...
	I1121 23:46:59.957996   15929 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1121 23:46:59.957998   15929 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	W1121 23:46:59.960784   15929 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
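	[note] This warning is an optimistic-concurrency failure rather than a hard error: between minikube reading the local-path StorageClass and writing it back, something else updated the object, so the write was rejected because its resourceVersion was stale. The usual remedies are to re-read and retry, or to use a patch, which carries no resourceVersion and so cannot hit this class of conflict. An illustrative manual fix (not what the addon does internally):

	    kubectl patch storageclass local-path -p \
	      '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'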
	I1121 23:46:59.961150   15929 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1121 23:46:59.961168   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:46:59.961516   15929 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1121 23:46:59.961536   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
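	[note] kapi.go:75/96 is minikube's pod-readiness poll: list pods matching a label selector in a namespace, then re-check until each is Running and Ready or the timeout expires. For ingress-nginx the selector matches three pods, most likely the controller plus the two one-shot admission jobs. A rough kubectl analogue for the controller (illustrative; note that Completed job pods never become Ready, which is one reason minikube inspects pod phases itself rather than shelling out):

	    kubectl -n ingress-nginx wait pod \
	      -l app.kubernetes.io/component=controller \
	      --for=condition=Ready --timeout=6m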
	I1121 23:47:00.371483   15929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.259935961s)
	W1121 23:47:00.371541   15929 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1121 23:47:00.371571   15929 retry.go:31] will retry after 289.175888ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
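	[note] Both attempts fail identically and for the same reason: the VolumeSnapshot CRDs and a VolumeSnapshotClass object are applied in a single kubectl invocation, and the API server cannot map kind VolumeSnapshotClass until the freshly created CRDs are established, hence "ensure CRDs are installed first". The retry a few lines below (now with apply --force) succeeds at 23:47:03 once discovery has caught up. A two-phase sequence over the same manifests avoids the race entirely (illustrative ordering, not what minikube does, since it simply retries):

	    # 1. install the CRDs and wait for them to be established
	    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	    kubectl wait --for condition=established --timeout=60s \
	      crd/volumesnapshotclasses.snapshot.storage.k8s.io
	    # 2. only then create objects of the new kind
	    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml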
	I1121 23:47:00.371726   15929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.127019443s)
	I1121 23:47:00.371750   15929 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-386094"
	I1121 23:47:00.374135   15929 out.go:179] * Verifying csi-hostpath-driver addon...
	I1121 23:47:00.376184   15929 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1121 23:47:00.378809   15929 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1121 23:47:00.378831   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:00.478846   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:00.478999   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:00.661278   15929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1121 23:47:00.878893   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:00.960837   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:00.960841   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1121 23:47:01.024485   15929 node_ready.go:57] node "addons-386094" has "Ready":"False" status (will retry)
	I1121 23:47:01.379602   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:01.480301   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:01.480378   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:01.878634   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:01.960658   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:01.960758   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:02.379620   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:02.480501   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:02.480622   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:02.885347   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:02.960368   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:02.960481   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:03.081892   15929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.420572351s)
	I1121 23:47:03.379590   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:03.480548   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:03.480834   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1121 23:47:03.524341   15929 node_ready.go:57] node "addons-386094" has "Ready":"False" status (will retry)
	I1121 23:47:03.878731   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:03.960510   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:03.960680   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:04.379233   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:04.479498   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:04.479705   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:04.878727   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:04.961086   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:04.961260   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:05.379272   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:05.479932   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:05.480254   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:05.879422   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:05.960181   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:05.960382   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1121 23:47:06.025286   15929 node_ready.go:57] node "addons-386094" has "Ready":"False" status (will retry)
	I1121 23:47:06.198942   15929 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1121 23:47:06.199010   15929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386094
	I1121 23:47:06.216036   15929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/addons-386094/id_rsa Username:docker}
	I1121 23:47:06.309156   15929 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1121 23:47:06.320548   15929 addons.go:239] Setting addon gcp-auth=true in "addons-386094"
	I1121 23:47:06.320596   15929 host.go:66] Checking if "addons-386094" exists ...
	I1121 23:47:06.320932   15929 cli_runner.go:164] Run: docker container inspect addons-386094 --format={{.State.Status}}
	I1121 23:47:06.338034   15929 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1121 23:47:06.338100   15929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386094
	I1121 23:47:06.354067   15929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/addons-386094/id_rsa Username:docker}
	I1121 23:47:06.379014   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:06.439892   15929 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1121 23:47:06.440993   15929 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1121 23:47:06.441975   15929 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1121 23:47:06.441990   15929 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1121 23:47:06.453765   15929 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1121 23:47:06.453784   15929 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1121 23:47:06.461154   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:06.461158   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:06.465652   15929 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1121 23:47:06.465668   15929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1121 23:47:06.477261   15929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
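	[note] The three gcp-auth manifests create the namespace, a Service, and the webhook deployment; once its pod is Ready, the addon's mutating admission webhook injects the credentials scp'd in at 23:47:06 into newly created pods. Hedged spot checks after the apply (assuming the addon's standard object names):

	    kubectl -n gcp-auth get pods,svc
	    kubectl get mutatingwebhookconfigurations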
	I1121 23:47:06.754577   15929 addons.go:495] Verifying addon gcp-auth=true in "addons-386094"
	I1121 23:47:06.755753   15929 out.go:179] * Verifying gcp-auth addon...
	I1121 23:47:06.757521   15929 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1121 23:47:06.759614   15929 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1121 23:47:06.759628   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:06.878590   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:06.960504   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:06.960652   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:07.259683   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:07.378748   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:07.460868   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:07.460929   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:07.759879   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:07.879156   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:07.959923   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:07.959921   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:08.259868   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:08.379187   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:08.460007   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:08.460227   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1121 23:47:08.523946   15929 node_ready.go:57] node "addons-386094" has "Ready":"False" status (will retry)
	I1121 23:47:08.760248   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:08.878385   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:08.960332   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:08.960500   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:09.260347   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:09.378111   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:09.459911   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:09.460100   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:09.760034   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:09.879303   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:09.960197   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:09.960351   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:10.260234   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:10.378025   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:10.460956   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:10.461138   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:10.760203   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:10.879320   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:10.960100   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:10.960425   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1121 23:47:11.024075   15929 node_ready.go:57] node "addons-386094" has "Ready":"False" status (will retry)
	I1121 23:47:11.260099   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:11.379096   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:11.459911   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:11.459987   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:11.759977   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:11.879231   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:11.979739   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:11.979853   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:12.259794   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:12.378843   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:12.460947   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:12.461134   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:12.760288   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:12.878484   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:12.960693   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:12.960747   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1121 23:47:13.024616   15929 node_ready.go:57] node "addons-386094" has "Ready":"False" status (will retry)
	I1121 23:47:13.259783   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:13.378973   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:13.460794   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:13.460883   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:13.759917   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:13.879209   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:13.960332   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:13.960402   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:14.260577   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:14.378840   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:14.460955   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:14.461036   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:14.760011   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:14.879042   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:14.961038   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:14.961199   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:15.260357   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:15.378408   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:15.461115   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:15.461221   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1121 23:47:15.523769   15929 node_ready.go:57] node "addons-386094" has "Ready":"False" status (will retry)
	I1121 23:47:15.760085   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:15.879495   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:15.960240   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:15.960456   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:16.260436   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:16.378496   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:16.460497   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:16.460542   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:16.759589   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:16.878652   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:16.960643   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:16.960696   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:17.260358   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:17.378089   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:17.459936   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:17.460074   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1121 23:47:17.523794   15929 node_ready.go:57] node "addons-386094" has "Ready":"False" status (will retry)
	I1121 23:47:17.759924   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:17.879156   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:17.960104   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:17.960233   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:18.259905   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:18.379066   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:18.460203   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:18.460404   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:18.760520   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:18.878422   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:18.960451   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:18.960671   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:19.259516   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:19.378406   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:19.460179   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:19.460340   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1121 23:47:19.523954   15929 node_ready.go:57] node "addons-386094" has "Ready":"False" status (will retry)
	I1121 23:47:19.760212   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:19.879360   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:19.960291   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:19.960401   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:20.260288   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:20.378169   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:20.460005   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:20.460063   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:20.760000   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:20.879043   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:20.961135   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:20.961371   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:21.260191   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:21.378985   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:21.460898   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:21.461028   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:21.759636   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:21.878955   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:21.979389   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:21.979437   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1121 23:47:22.023988   15929 node_ready.go:57] node "addons-386094" has "Ready":"False" status (will retry)
	I1121 23:47:22.260524   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:22.378503   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:22.460588   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:22.460756   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:22.759513   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:22.878668   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:22.960900   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:22.960902   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:23.259789   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:23.378911   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:23.461158   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:23.461224   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:23.759822   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:23.879050   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:23.960873   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:23.961048   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:24.259909   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:24.379027   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:24.461520   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:24.461667   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1121 23:47:24.524502   15929 node_ready.go:57] node "addons-386094" has "Ready":"False" status (will retry)
	I1121 23:47:24.759756   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:24.878800   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:24.960807   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:24.960945   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:25.260030   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:25.378943   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:25.460732   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:25.460966   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:25.759623   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:25.878658   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:25.960561   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:25.960688   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:26.259476   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:26.378636   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:26.460439   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:26.460624   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1121 23:47:26.524569   15929 node_ready.go:57] node "addons-386094" has "Ready":"False" status (will retry)
	I1121 23:47:26.759749   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:26.878923   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:26.960757   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:26.960918   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:27.259912   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:27.378917   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:27.460939   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:27.461170   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:27.759763   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:27.878841   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:27.960939   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:27.961120   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:28.259984   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:28.379178   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:28.460032   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:28.460102   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:28.760446   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:28.878635   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:28.960586   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:28.960785   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1121 23:47:29.024466   15929 node_ready.go:57] node "addons-386094" has "Ready":"False" status (will retry)
	I1121 23:47:29.259585   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:29.378631   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:29.460568   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:29.460652   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:29.760349   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:29.878273   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:29.960234   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:29.960504   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:30.260145   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:30.379023   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:30.460820   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:30.460943   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:30.759900   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:30.879079   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:30.960077   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:30.960268   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:31.260125   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:31.379323   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:31.460364   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:31.460412   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1121 23:47:31.524180   15929 node_ready.go:57] node "addons-386094" has "Ready":"False" status (will retry)
	I1121 23:47:31.760943   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:31.879091   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:31.961157   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:31.961289   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:32.260290   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:32.378137   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:32.459936   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:32.460111   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:32.759946   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:32.878883   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:32.960786   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:32.960923   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:33.259696   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:33.378965   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:33.461023   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:33.461166   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:33.759963   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:33.879272   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:33.960226   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:33.960438   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1121 23:47:34.023881   15929 node_ready.go:57] node "addons-386094" has "Ready":"False" status (will retry)
	I1121 23:47:34.260196   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:34.379320   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:34.460126   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:34.460340   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:34.760375   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:34.878222   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:34.960138   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:34.960287   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:35.260333   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:35.378236   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:35.460453   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:35.460662   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:35.760645   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:35.878538   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:35.960521   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:35.960648   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1121 23:47:36.024254   15929 node_ready.go:57] node "addons-386094" has "Ready":"False" status (will retry)
	I1121 23:47:36.259901   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:36.378675   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:36.460584   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:36.460765   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:36.759516   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:36.878529   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:36.960591   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:36.960647   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:37.259441   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:37.378332   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:37.460218   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:37.460344   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:37.760346   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:37.878392   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:37.960604   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:37.960792   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1121 23:47:38.024533   15929 node_ready.go:57] node "addons-386094" has "Ready":"False" status (will retry)
	I1121 23:47:38.259672   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:38.378915   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:38.461050   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:38.461280   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:38.760048   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:38.879142   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:38.960166   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:38.960300   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:39.260276   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:39.378314   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:39.460260   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:39.460417   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:39.760709   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:39.878801   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:39.961033   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:39.961061   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:40.030795   15929 node_ready.go:49] node "addons-386094" is "Ready"
	I1121 23:47:40.030829   15929 node_ready.go:38] duration metric: took 41.008959779s for node "addons-386094" to be "Ready" ...
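
[editor's note] The node_ready.go lines above come from a poll loop that repeatedly fetches the node and inspects its Ready condition until it flips to True (about 41s here). Below is a minimal client-go sketch of that check; the node name and cadence are taken from the log, while the kubeconfig path and 2s interval are illustrative assumptions, not minikube's actual code.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeIsReady reports whether the named node's Ready condition is True.
func nodeIsReady(cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Assumption: a standard kubeconfig at an illustrative path.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	start := time.Now()
	for {
		ready, err := nodeIsReady(cs, "addons-386094")
		if err != nil {
			panic(err)
		}
		if ready {
			fmt.Printf("node is Ready after %s\n", time.Since(start))
			return
		}
		time.Sleep(2 * time.Second) // the log shows retries every ~2-3s
	}
}
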
	I1121 23:47:40.030849   15929 api_server.go:52] waiting for apiserver process to appear ...
	I1121 23:47:40.030987   15929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 23:47:40.050924   15929 api_server.go:72] duration metric: took 41.606788143s to wait for apiserver process to appear ...
	I1121 23:47:40.050953   15929 api_server.go:88] waiting for apiserver healthz status ...
	I1121 23:47:40.050976   15929 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1121 23:47:40.057780   15929 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1121 23:47:40.059028   15929 api_server.go:141] control plane version: v1.34.1
	I1121 23:47:40.059074   15929 api_server.go:131] duration metric: took 8.092867ms to wait for apiserver health ...
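
[editor's note] The healthz phase above is a plain HTTPS GET against the apiserver: a healthy apiserver answers 200 with the body "ok", exactly as logged. A self-contained sketch follows; the endpoint URL is from the log, and the insecure TLS config is an assumption for brevity (minikube itself authenticates with the cluster certificates from the kubeconfig).

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption: skip cert verification in this sketch only.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		fmt.Println("healthz not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver returns 200 with body "ok", matching the log.
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}
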
	I1121 23:47:40.059086   15929 system_pods.go:43] waiting for kube-system pods to appear ...
	I1121 23:47:40.063746   15929 system_pods.go:59] 19 kube-system pods found
	I1121 23:47:40.063784   15929 system_pods.go:61] "coredns-66bc5c9577-jdqrr" [b648a6f7-a412-4075-bcb6-7fde7db81c13] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 23:47:40.063793   15929 system_pods.go:61] "csi-hostpath-attacher-0" [1f374a63-58b4-4dba-89a4-0d60da730c94] Pending
	I1121 23:47:40.063801   15929 system_pods.go:61] "csi-hostpath-resizer-0" [60799ca4-c6e0-485b-870a-7fe05ffeed25] Pending
	I1121 23:47:40.063807   15929 system_pods.go:61] "csi-hostpathplugin-bw962" [19b38a39-2690-45c4-8297-fd5a89899c9b] Pending
	I1121 23:47:40.063811   15929 system_pods.go:61] "etcd-addons-386094" [6deab294-ddbf-4424-8887-3b099c8fde2a] Running
	I1121 23:47:40.063816   15929 system_pods.go:61] "kindnet-nhwtc" [ee44da65-a7eb-41f1-88ce-d96fa74d3e79] Running
	I1121 23:47:40.063821   15929 system_pods.go:61] "kube-apiserver-addons-386094" [95082b3d-db11-4152-b4b5-52d089ba95d4] Running
	I1121 23:47:40.063827   15929 system_pods.go:61] "kube-controller-manager-addons-386094" [3d6f340e-d04e-4ed6-98e2-2be22b125ed8] Running
	I1121 23:47:40.063837   15929 system_pods.go:61] "kube-ingress-dns-minikube" [197d21e6-4a2b-4537-be0f-75276bf9a7bd] Pending
	I1121 23:47:40.063842   15929 system_pods.go:61] "kube-proxy-bqrb5" [57be0712-5da6-4247-b2be-62513e37470a] Running
	I1121 23:47:40.063849   15929 system_pods.go:61] "kube-scheduler-addons-386094" [d3d7633e-be01-4f85-aaa4-0bad5779b753] Running
	I1121 23:47:40.063854   15929 system_pods.go:61] "metrics-server-85b7d694d7-jj26h" [e0c05e28-f4b5-448b-959e-57c85afe18c7] Pending
	I1121 23:47:40.063863   15929 system_pods.go:61] "nvidia-device-plugin-daemonset-mqmzt" [230f0c0c-240c-4820-b379-e5dfea036b89] Pending
	I1121 23:47:40.063868   15929 system_pods.go:61] "registry-6b586f9694-sgqmn" [495ee6a1-7806-44cd-b4c4-c284b112d936] Pending
	I1121 23:47:40.063889   15929 system_pods.go:61] "registry-creds-764b6fb674-hvw4s" [d8248262-bb58-4830-86c5-7e3da3404d7a] Pending
	I1121 23:47:40.063894   15929 system_pods.go:61] "registry-proxy-7jwr9" [c7f10f4f-ba0a-466c-9874-8dad438139d6] Pending
	I1121 23:47:40.063898   15929 system_pods.go:61] "snapshot-controller-7d9fbc56b8-mfq9f" [ab513e9c-3ef9-4e55-b438-e10a14f09905] Pending
	I1121 23:47:40.063903   15929 system_pods.go:61] "snapshot-controller-7d9fbc56b8-wknk9" [ba2bf066-0739-46c2-9604-b532641c7f44] Pending
	I1121 23:47:40.063908   15929 system_pods.go:61] "storage-provisioner" [10f7dd48-8738-4d41-a8ca-3f45d17aec90] Pending
	I1121 23:47:40.063915   15929 system_pods.go:74] duration metric: took 4.821487ms to wait for pod list to return data ...
	I1121 23:47:40.063927   15929 default_sa.go:34] waiting for default service account to be created ...
	I1121 23:47:40.067093   15929 default_sa.go:45] found service account: "default"
	I1121 23:47:40.067114   15929 default_sa.go:55] duration metric: took 3.176618ms for default service account to be created ...
	I1121 23:47:40.067123   15929 system_pods.go:116] waiting for k8s-apps to be running ...
	I1121 23:47:40.071166   15929 system_pods.go:86] 20 kube-system pods found
	I1121 23:47:40.071192   15929 system_pods.go:89] "amd-gpu-device-plugin-rjdxd" [9868ed15-c63b-4d0b-badf-c7e7b94197c6] Pending
	I1121 23:47:40.071204   15929 system_pods.go:89] "coredns-66bc5c9577-jdqrr" [b648a6f7-a412-4075-bcb6-7fde7db81c13] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 23:47:40.071210   15929 system_pods.go:89] "csi-hostpath-attacher-0" [1f374a63-58b4-4dba-89a4-0d60da730c94] Pending
	I1121 23:47:40.071217   15929 system_pods.go:89] "csi-hostpath-resizer-0" [60799ca4-c6e0-485b-870a-7fe05ffeed25] Pending
	I1121 23:47:40.071221   15929 system_pods.go:89] "csi-hostpathplugin-bw962" [19b38a39-2690-45c4-8297-fd5a89899c9b] Pending
	I1121 23:47:40.071225   15929 system_pods.go:89] "etcd-addons-386094" [6deab294-ddbf-4424-8887-3b099c8fde2a] Running
	I1121 23:47:40.071231   15929 system_pods.go:89] "kindnet-nhwtc" [ee44da65-a7eb-41f1-88ce-d96fa74d3e79] Running
	I1121 23:47:40.071243   15929 system_pods.go:89] "kube-apiserver-addons-386094" [95082b3d-db11-4152-b4b5-52d089ba95d4] Running
	I1121 23:47:40.071249   15929 system_pods.go:89] "kube-controller-manager-addons-386094" [3d6f340e-d04e-4ed6-98e2-2be22b125ed8] Running
	I1121 23:47:40.071258   15929 system_pods.go:89] "kube-ingress-dns-minikube" [197d21e6-4a2b-4537-be0f-75276bf9a7bd] Pending
	I1121 23:47:40.071262   15929 system_pods.go:89] "kube-proxy-bqrb5" [57be0712-5da6-4247-b2be-62513e37470a] Running
	I1121 23:47:40.071268   15929 system_pods.go:89] "kube-scheduler-addons-386094" [d3d7633e-be01-4f85-aaa4-0bad5779b753] Running
	I1121 23:47:40.071273   15929 system_pods.go:89] "metrics-server-85b7d694d7-jj26h" [e0c05e28-f4b5-448b-959e-57c85afe18c7] Pending
	I1121 23:47:40.071278   15929 system_pods.go:89] "nvidia-device-plugin-daemonset-mqmzt" [230f0c0c-240c-4820-b379-e5dfea036b89] Pending
	I1121 23:47:40.071283   15929 system_pods.go:89] "registry-6b586f9694-sgqmn" [495ee6a1-7806-44cd-b4c4-c284b112d936] Pending
	I1121 23:47:40.071291   15929 system_pods.go:89] "registry-creds-764b6fb674-hvw4s" [d8248262-bb58-4830-86c5-7e3da3404d7a] Pending
	I1121 23:47:40.071295   15929 system_pods.go:89] "registry-proxy-7jwr9" [c7f10f4f-ba0a-466c-9874-8dad438139d6] Pending
	I1121 23:47:40.071304   15929 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mfq9f" [ab513e9c-3ef9-4e55-b438-e10a14f09905] Pending
	I1121 23:47:40.071308   15929 system_pods.go:89] "snapshot-controller-7d9fbc56b8-wknk9" [ba2bf066-0739-46c2-9604-b532641c7f44] Pending
	I1121 23:47:40.071313   15929 system_pods.go:89] "storage-provisioner" [10f7dd48-8738-4d41-a8ca-3f45d17aec90] Pending
	I1121 23:47:40.071334   15929 retry.go:31] will retry after 269.09938ms: missing components: kube-dns
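
[editor's note] The retry.go line above, and the hundreds of kapi.go "waiting for pod" lines throughout this block, are one pattern: list pods by label selector, log any that are not yet Running, back off, and try again. A minimal client-go sketch of that loop follows; the selector, namespace, and ~500ms cadence match the log, while the kubeconfig path is an illustrative assumption.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	selector := "kubernetes.io/minikube-addons=registry" // one of the selectors in the log

	for {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			panic(err)
		}
		allRunning := len(pods.Items) > 0
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				allRunning = false
				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
			}
		}
		if allRunning {
			return
		}
		time.Sleep(500 * time.Millisecond) // each selector is polled roughly twice a second above
	}
}
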
	I1121 23:47:40.259904   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:40.362843   15929 system_pods.go:86] 20 kube-system pods found
	I1121 23:47:40.362886   15929 system_pods.go:89] "amd-gpu-device-plugin-rjdxd" [9868ed15-c63b-4d0b-badf-c7e7b94197c6] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1121 23:47:40.362898   15929 system_pods.go:89] "coredns-66bc5c9577-jdqrr" [b648a6f7-a412-4075-bcb6-7fde7db81c13] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 23:47:40.362909   15929 system_pods.go:89] "csi-hostpath-attacher-0" [1f374a63-58b4-4dba-89a4-0d60da730c94] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1121 23:47:40.362917   15929 system_pods.go:89] "csi-hostpath-resizer-0" [60799ca4-c6e0-485b-870a-7fe05ffeed25] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1121 23:47:40.362925   15929 system_pods.go:89] "csi-hostpathplugin-bw962" [19b38a39-2690-45c4-8297-fd5a89899c9b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1121 23:47:40.362930   15929 system_pods.go:89] "etcd-addons-386094" [6deab294-ddbf-4424-8887-3b099c8fde2a] Running
	I1121 23:47:40.362938   15929 system_pods.go:89] "kindnet-nhwtc" [ee44da65-a7eb-41f1-88ce-d96fa74d3e79] Running
	I1121 23:47:40.362944   15929 system_pods.go:89] "kube-apiserver-addons-386094" [95082b3d-db11-4152-b4b5-52d089ba95d4] Running
	I1121 23:47:40.362951   15929 system_pods.go:89] "kube-controller-manager-addons-386094" [3d6f340e-d04e-4ed6-98e2-2be22b125ed8] Running
	I1121 23:47:40.362964   15929 system_pods.go:89] "kube-ingress-dns-minikube" [197d21e6-4a2b-4537-be0f-75276bf9a7bd] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1121 23:47:40.362970   15929 system_pods.go:89] "kube-proxy-bqrb5" [57be0712-5da6-4247-b2be-62513e37470a] Running
	I1121 23:47:40.362977   15929 system_pods.go:89] "kube-scheduler-addons-386094" [d3d7633e-be01-4f85-aaa4-0bad5779b753] Running
	I1121 23:47:40.362984   15929 system_pods.go:89] "metrics-server-85b7d694d7-jj26h" [e0c05e28-f4b5-448b-959e-57c85afe18c7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1121 23:47:40.362993   15929 system_pods.go:89] "nvidia-device-plugin-daemonset-mqmzt" [230f0c0c-240c-4820-b379-e5dfea036b89] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1121 23:47:40.363001   15929 system_pods.go:89] "registry-6b586f9694-sgqmn" [495ee6a1-7806-44cd-b4c4-c284b112d936] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1121 23:47:40.363011   15929 system_pods.go:89] "registry-creds-764b6fb674-hvw4s" [d8248262-bb58-4830-86c5-7e3da3404d7a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1121 23:47:40.363019   15929 system_pods.go:89] "registry-proxy-7jwr9" [c7f10f4f-ba0a-466c-9874-8dad438139d6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1121 23:47:40.363030   15929 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mfq9f" [ab513e9c-3ef9-4e55-b438-e10a14f09905] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1121 23:47:40.363039   15929 system_pods.go:89] "snapshot-controller-7d9fbc56b8-wknk9" [ba2bf066-0739-46c2-9604-b532641c7f44] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1121 23:47:40.363048   15929 system_pods.go:89] "storage-provisioner" [10f7dd48-8738-4d41-a8ca-3f45d17aec90] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 23:47:40.363080   15929 retry.go:31] will retry after 366.151557ms: missing components: kube-dns
	I1121 23:47:40.460820   15929 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1121 23:47:40.460843   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:40.462957   15929 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1121 23:47:40.462980   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:40.463594   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:40.733225   15929 system_pods.go:86] 20 kube-system pods found
	I1121 23:47:40.733254   15929 system_pods.go:89] "amd-gpu-device-plugin-rjdxd" [9868ed15-c63b-4d0b-badf-c7e7b94197c6] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1121 23:47:40.733261   15929 system_pods.go:89] "coredns-66bc5c9577-jdqrr" [b648a6f7-a412-4075-bcb6-7fde7db81c13] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 23:47:40.733269   15929 system_pods.go:89] "csi-hostpath-attacher-0" [1f374a63-58b4-4dba-89a4-0d60da730c94] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1121 23:47:40.733276   15929 system_pods.go:89] "csi-hostpath-resizer-0" [60799ca4-c6e0-485b-870a-7fe05ffeed25] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1121 23:47:40.733282   15929 system_pods.go:89] "csi-hostpathplugin-bw962" [19b38a39-2690-45c4-8297-fd5a89899c9b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1121 23:47:40.733285   15929 system_pods.go:89] "etcd-addons-386094" [6deab294-ddbf-4424-8887-3b099c8fde2a] Running
	I1121 23:47:40.733290   15929 system_pods.go:89] "kindnet-nhwtc" [ee44da65-a7eb-41f1-88ce-d96fa74d3e79] Running
	I1121 23:47:40.733293   15929 system_pods.go:89] "kube-apiserver-addons-386094" [95082b3d-db11-4152-b4b5-52d089ba95d4] Running
	I1121 23:47:40.733297   15929 system_pods.go:89] "kube-controller-manager-addons-386094" [3d6f340e-d04e-4ed6-98e2-2be22b125ed8] Running
	I1121 23:47:40.733303   15929 system_pods.go:89] "kube-ingress-dns-minikube" [197d21e6-4a2b-4537-be0f-75276bf9a7bd] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1121 23:47:40.733309   15929 system_pods.go:89] "kube-proxy-bqrb5" [57be0712-5da6-4247-b2be-62513e37470a] Running
	I1121 23:47:40.733312   15929 system_pods.go:89] "kube-scheduler-addons-386094" [d3d7633e-be01-4f85-aaa4-0bad5779b753] Running
	I1121 23:47:40.733317   15929 system_pods.go:89] "metrics-server-85b7d694d7-jj26h" [e0c05e28-f4b5-448b-959e-57c85afe18c7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1121 23:47:40.733323   15929 system_pods.go:89] "nvidia-device-plugin-daemonset-mqmzt" [230f0c0c-240c-4820-b379-e5dfea036b89] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1121 23:47:40.733328   15929 system_pods.go:89] "registry-6b586f9694-sgqmn" [495ee6a1-7806-44cd-b4c4-c284b112d936] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1121 23:47:40.733334   15929 system_pods.go:89] "registry-creds-764b6fb674-hvw4s" [d8248262-bb58-4830-86c5-7e3da3404d7a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1121 23:47:40.733339   15929 system_pods.go:89] "registry-proxy-7jwr9" [c7f10f4f-ba0a-466c-9874-8dad438139d6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1121 23:47:40.733346   15929 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mfq9f" [ab513e9c-3ef9-4e55-b438-e10a14f09905] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1121 23:47:40.733352   15929 system_pods.go:89] "snapshot-controller-7d9fbc56b8-wknk9" [ba2bf066-0739-46c2-9604-b532641c7f44] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1121 23:47:40.733358   15929 system_pods.go:89] "storage-provisioner" [10f7dd48-8738-4d41-a8ca-3f45d17aec90] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 23:47:40.733374   15929 retry.go:31] will retry after 445.528563ms: missing components: kube-dns
	I1121 23:47:40.759396   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]

	I1121 23:47:40.878544   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:40.960909   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:40.961001   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:41.183389   15929 system_pods.go:86] 20 kube-system pods found
	I1121 23:47:41.183424   15929 system_pods.go:89] "amd-gpu-device-plugin-rjdxd" [9868ed15-c63b-4d0b-badf-c7e7b94197c6] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1121 23:47:41.183431   15929 system_pods.go:89] "coredns-66bc5c9577-jdqrr" [b648a6f7-a412-4075-bcb6-7fde7db81c13] Running
	I1121 23:47:41.183439   15929 system_pods.go:89] "csi-hostpath-attacher-0" [1f374a63-58b4-4dba-89a4-0d60da730c94] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1121 23:47:41.183446   15929 system_pods.go:89] "csi-hostpath-resizer-0" [60799ca4-c6e0-485b-870a-7fe05ffeed25] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1121 23:47:41.183460   15929 system_pods.go:89] "csi-hostpathplugin-bw962" [19b38a39-2690-45c4-8297-fd5a89899c9b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1121 23:47:41.183471   15929 system_pods.go:89] "etcd-addons-386094" [6deab294-ddbf-4424-8887-3b099c8fde2a] Running
	I1121 23:47:41.183478   15929 system_pods.go:89] "kindnet-nhwtc" [ee44da65-a7eb-41f1-88ce-d96fa74d3e79] Running
	I1121 23:47:41.183485   15929 system_pods.go:89] "kube-apiserver-addons-386094" [95082b3d-db11-4152-b4b5-52d089ba95d4] Running
	I1121 23:47:41.183491   15929 system_pods.go:89] "kube-controller-manager-addons-386094" [3d6f340e-d04e-4ed6-98e2-2be22b125ed8] Running
	I1121 23:47:41.183501   15929 system_pods.go:89] "kube-ingress-dns-minikube" [197d21e6-4a2b-4537-be0f-75276bf9a7bd] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1121 23:47:41.183506   15929 system_pods.go:89] "kube-proxy-bqrb5" [57be0712-5da6-4247-b2be-62513e37470a] Running
	I1121 23:47:41.183512   15929 system_pods.go:89] "kube-scheduler-addons-386094" [d3d7633e-be01-4f85-aaa4-0bad5779b753] Running
	I1121 23:47:41.183527   15929 system_pods.go:89] "metrics-server-85b7d694d7-jj26h" [e0c05e28-f4b5-448b-959e-57c85afe18c7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1121 23:47:41.183534   15929 system_pods.go:89] "nvidia-device-plugin-daemonset-mqmzt" [230f0c0c-240c-4820-b379-e5dfea036b89] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1121 23:47:41.183540   15929 system_pods.go:89] "registry-6b586f9694-sgqmn" [495ee6a1-7806-44cd-b4c4-c284b112d936] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1121 23:47:41.183545   15929 system_pods.go:89] "registry-creds-764b6fb674-hvw4s" [d8248262-bb58-4830-86c5-7e3da3404d7a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1121 23:47:41.183552   15929 system_pods.go:89] "registry-proxy-7jwr9" [c7f10f4f-ba0a-466c-9874-8dad438139d6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1121 23:47:41.183564   15929 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mfq9f" [ab513e9c-3ef9-4e55-b438-e10a14f09905] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1121 23:47:41.183573   15929 system_pods.go:89] "snapshot-controller-7d9fbc56b8-wknk9" [ba2bf066-0739-46c2-9604-b532641c7f44] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1121 23:47:41.183579   15929 system_pods.go:89] "storage-provisioner" [10f7dd48-8738-4d41-a8ca-3f45d17aec90] Running
	I1121 23:47:41.183590   15929 system_pods.go:126] duration metric: took 1.116460563s to wait for k8s-apps to be running ...
	I1121 23:47:41.183603   15929 system_svc.go:44] waiting for kubelet service to be running ...
	I1121 23:47:41.183657   15929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 23:47:41.199631   15929 system_svc.go:56] duration metric: took 16.021076ms WaitForService to wait for kubelet
	I1121 23:47:41.199663   15929 kubeadm.go:587] duration metric: took 42.755532163s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
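
[editor's note] The kubelet check logged just above shells out to systemd: `systemctl is-active --quiet` exits 0 iff the unit is active, and the test treats that exit code as the readiness signal. The sketch below runs the logged command locally via os/exec instead of over the harness's ssh_runner, which is an assumption for brevity (it also requires sudo privileges on the host).

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	// Arguments mirror the logged command verbatim, including the trailing
	// "service kubelet" pair.
	err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run()
	if err != nil {
		fmt.Println("kubelet service is not active:", err)
		return
	}
	fmt.Printf("duration metric: took %s to confirm kubelet is active\n", time.Since(start))
}
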
	I1121 23:47:41.199683   15929 node_conditions.go:102] verifying NodePressure condition ...
	I1121 23:47:41.202261   15929 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1121 23:47:41.202290   15929 node_conditions.go:123] node cpu capacity is 8
	I1121 23:47:41.202310   15929 node_conditions.go:105] duration metric: took 2.621029ms to run NodePressure ...
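
[editor's note] The NodePressure phase reads the node's capacity fields reported above (ephemeral-storage 304681132Ki, cpu 8) and verifies that no pressure condition is set. A compact client-go sketch, under the same assumed kubeconfig as the earlier sketches:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "addons-386094", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
	cpu := node.Status.Capacity[corev1.ResourceCPU]
	fmt.Printf("node storage ephemeral capacity is %s\n", storage.String())
	fmt.Printf("node cpu capacity is %s\n", cpu.String())

	// Any of these conditions being True means the node is under pressure
	// and the verification would fail.
	for _, c := range node.Status.Conditions {
		switch c.Type {
		case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
			if c.Status == corev1.ConditionTrue {
				fmt.Printf("node condition %s is True\n", c.Type)
			}
		}
	}
}
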
	I1121 23:47:41.202324   15929 start.go:242] waiting for startup goroutines ...
	I1121 23:47:41.260409   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:41.379174   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:41.461432   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:41.461523   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:41.759738   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:41.878840   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:41.961088   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:41.961373   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:42.262131   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:42.380880   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:42.462965   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:42.463968   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:42.761191   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:42.880321   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:42.960915   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:42.960972   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:43.260461   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:43.379673   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:43.461296   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:43.461346   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:43.760838   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:43.879916   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:43.980262   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:43.980272   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:44.261453   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:44.379418   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:44.460688   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:44.460782   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:44.760788   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:44.879730   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:44.961129   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:44.961293   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:45.261162   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:45.379618   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:45.461108   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:45.461227   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:45.761666   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:45.879816   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:45.961693   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:45.961862   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:46.261130   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:46.380839   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:46.461415   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:46.461493   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:46.760544   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:46.879624   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:46.961294   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:46.961338   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:47.261331   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:47.379504   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:47.461394   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:47.461468   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:47.760002   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:47.895537   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:47.996393   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:47.996483   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:48.261122   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:48.379446   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:48.461291   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:48.461360   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:48.761692   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:48.879636   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:48.961268   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:48.961328   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:49.261340   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:49.379825   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:49.461495   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:49.461550   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:49.759928   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:49.879350   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:49.960957   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:49.960947   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:50.260981   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:50.379928   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:50.461362   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:50.461417   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:50.760756   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:50.880290   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:50.960905   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:50.960927   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:51.260736   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:51.380399   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:51.461484   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:51.461512   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:51.760380   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:51.879338   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:51.979597   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:51.979805   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:52.260525   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:52.378832   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:52.460930   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:52.461137   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:52.761371   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:52.879462   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:52.961235   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:52.961379   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:53.260828   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:53.463866   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:53.463889   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:53.464003   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:53.760388   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:53.878817   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:53.961241   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:53.961421   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:54.260757   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:54.379665   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:54.461711   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:54.461751   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:54.760007   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:54.879937   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:54.963243   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:54.967283   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:55.260353   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:55.380685   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:55.480749   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:55.480822   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:55.759953   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:55.879711   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:55.960766   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:55.960854   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:56.260875   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:56.380022   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:56.461803   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:56.461911   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:56.761940   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:56.880019   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:56.960903   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:56.961044   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:57.260492   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:57.379172   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:57.461946   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:57.461984   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:57.760218   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:57.878900   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:57.961253   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:57.961362   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:58.261182   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:58.379418   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:58.460237   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:58.460266   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:58.761233   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:58.879585   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:58.961207   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:58.961273   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:59.261455   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:59.379087   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:59.461155   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:59.461295   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:47:59.760587   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:47:59.878947   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:47:59.961010   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:47:59.961065   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:00.260712   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:00.379571   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:00.461300   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:00.461541   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:00.761804   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:00.879663   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:00.961462   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:00.961514   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:01.260484   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:01.378971   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:01.461390   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:01.461422   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:01.759946   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:01.879454   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:01.961013   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:01.961027   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:02.262195   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:02.379369   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:02.460485   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:02.460491   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:02.760937   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:02.879973   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:02.961658   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:02.961706   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:03.259929   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:03.379323   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:03.460364   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:03.460578   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:03.759947   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:03.881756   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:03.960783   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:03.960943   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:04.260674   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:04.379310   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:04.461150   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:04.461239   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:04.761020   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:04.879752   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:04.961193   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:04.961196   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:05.260923   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:05.379949   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:05.461629   15929 kapi.go:107] duration metric: took 1m5.503626359s to wait for kubernetes.io/minikube-addons=registry ...
	I1121 23:48:05.461652   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:05.760188   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:05.879680   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:05.961012   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:06.261743   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:06.380093   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:06.462612   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:06.760430   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:06.879683   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:06.961362   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:07.261200   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:07.430755   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:07.460536   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:07.760244   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:07.879915   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:07.961542   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:08.259753   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:08.381230   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:08.460896   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:08.761159   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:08.879844   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:08.960596   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:09.260405   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:09.378956   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:09.479848   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:09.760544   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:09.881081   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:09.961825   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:10.260356   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:10.380867   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:10.460758   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:10.760462   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:10.879472   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:10.960952   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:11.260776   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:11.379538   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:11.460437   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:11.759731   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:11.878949   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:11.961196   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:12.260656   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:12.379133   15929 kapi.go:107] duration metric: took 1m12.00295056s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1121 23:48:12.462157   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:12.761820   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:12.961919   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:13.259977   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:13.461118   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:13.760764   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:13.965275   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:14.261134   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:14.461310   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:14.760846   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:14.961709   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:15.260930   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:15.461902   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:15.760528   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:15.961103   15929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:16.262280   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:16.460866   15929 kapi.go:107] duration metric: took 1m16.502863434s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1121 23:48:16.760813   15929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:17.261025   15929 kapi.go:107] duration metric: took 1m10.503503111s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1121 23:48:17.262207   15929 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-386094 cluster.
	I1121 23:48:17.263247   15929 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1121 23:48:17.264248   15929 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1121 23:48:17.265365   15929 out.go:179] * Enabled addons: inspektor-gadget, nvidia-device-plugin, registry-creds, cloud-spanner, amd-gpu-device-plugin, metrics-server, storage-provisioner, ingress-dns, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1121 23:48:17.266489   15929 addons.go:530] duration metric: took 1m18.822318032s for enable addons: enabled=[inspektor-gadget nvidia-device-plugin registry-creds cloud-spanner amd-gpu-device-plugin metrics-server storage-provisioner ingress-dns yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1121 23:48:17.266531   15929 start.go:247] waiting for cluster config update ...
	I1121 23:48:17.266560   15929 start.go:256] writing updated cluster config ...
	I1121 23:48:17.266789   15929 ssh_runner.go:195] Run: rm -f paused
	I1121 23:48:17.270366   15929 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 23:48:17.272959   15929 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-jdqrr" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:48:17.276478   15929 pod_ready.go:94] pod "coredns-66bc5c9577-jdqrr" is "Ready"
	I1121 23:48:17.276500   15929 pod_ready.go:86] duration metric: took 3.522986ms for pod "coredns-66bc5c9577-jdqrr" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:48:17.278128   15929 pod_ready.go:83] waiting for pod "etcd-addons-386094" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:48:17.281226   15929 pod_ready.go:94] pod "etcd-addons-386094" is "Ready"
	I1121 23:48:17.281244   15929 pod_ready.go:86] duration metric: took 3.096965ms for pod "etcd-addons-386094" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:48:17.282692   15929 pod_ready.go:83] waiting for pod "kube-apiserver-addons-386094" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:48:17.285808   15929 pod_ready.go:94] pod "kube-apiserver-addons-386094" is "Ready"
	I1121 23:48:17.285826   15929 pod_ready.go:86] duration metric: took 3.118387ms for pod "kube-apiserver-addons-386094" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:48:17.287219   15929 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-386094" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:48:17.673662   15929 pod_ready.go:94] pod "kube-controller-manager-addons-386094" is "Ready"
	I1121 23:48:17.673691   15929 pod_ready.go:86] duration metric: took 386.45492ms for pod "kube-controller-manager-addons-386094" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:48:17.873412   15929 pod_ready.go:83] waiting for pod "kube-proxy-bqrb5" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:48:18.301594   15929 pod_ready.go:94] pod "kube-proxy-bqrb5" is "Ready"
	I1121 23:48:18.301625   15929 pod_ready.go:86] duration metric: took 428.1844ms for pod "kube-proxy-bqrb5" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:48:18.562111   15929 pod_ready.go:83] waiting for pod "kube-scheduler-addons-386094" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:48:18.873927   15929 pod_ready.go:94] pod "kube-scheduler-addons-386094" is "Ready"
	I1121 23:48:18.873953   15929 pod_ready.go:86] duration metric: took 311.814985ms for pod "kube-scheduler-addons-386094" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:48:18.873967   15929 pod_ready.go:40] duration metric: took 1.603576966s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 23:48:18.916879   15929 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1121 23:48:18.918309   15929 out.go:179] * Done! kubectl is now configured to use "addons-386094" cluster and "default" namespace by default
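
The two gcp-auth hints in the output above are worth making concrete. The webhook injects credentials at pod admission, so the opt-out label has to be part of the pod spec when the pod is created, and pods that predate the webhook only get credentials after a refresh. A minimal sketch, with the label key and the --refresh flag taken verbatim from the messages above (the pod name skip-demo is hypothetical):

    # Create a pod carrying the opt-out label at admission time:
    kubectl run skip-demo --image=busybox --labels=gcp-auth-skip-secret=true -- sleep 3600

    # Re-mount credentials into pods that existed before gcp-auth came up:
    out/minikube-linux-amd64 -p addons-386094 addons enable gcp-auth --refresh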
	
	
	==> CRI-O <==
	Nov 21 23:48:17 addons-386094 crio[773]: time="2025-11-21T23:48:17.126949863Z" level=info msg="Deleting pod gcp-auth_gcp-auth-certs-patch-spdqg from CNI network \"kindnet\" (type=ptp)"
	Nov 21 23:48:17 addons-386094 crio[773]: time="2025-11-21T23:48:17.152561186Z" level=info msg="Stopped pod sandbox: 9315d1f95155cf08e5d7efcd6e99acfebeb7a424b316b18372dfbd7551de84c6" id=e41ba81c-a986-4672-b3a6-0ff632b1f5c1 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 21 23:48:19 addons-386094 crio[773]: time="2025-11-21T23:48:19.722927285Z" level=info msg="Running pod sandbox: default/busybox/POD" id=893acc57-e824-42a0-8807-1b2c21c77a18 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 21 23:48:19 addons-386094 crio[773]: time="2025-11-21T23:48:19.722985891Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 23:48:19 addons-386094 crio[773]: time="2025-11-21T23:48:19.728716544Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:aa66f53b81b359c2832ec3157aa0b8ebb9180147369d74dd72be982526c35cbb UID:36e8d411-9a35-4c79-b1f5-8e3e5c5fc9c1 NetNS:/var/run/netns/b6d3eff0-6655-4bd3-84bf-224bdca7d09e Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0008844a0}] Aliases:map[]}"
	Nov 21 23:48:19 addons-386094 crio[773]: time="2025-11-21T23:48:19.728740712Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 21 23:48:19 addons-386094 crio[773]: time="2025-11-21T23:48:19.737974963Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:aa66f53b81b359c2832ec3157aa0b8ebb9180147369d74dd72be982526c35cbb UID:36e8d411-9a35-4c79-b1f5-8e3e5c5fc9c1 NetNS:/var/run/netns/b6d3eff0-6655-4bd3-84bf-224bdca7d09e Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0008844a0}] Aliases:map[]}"
	Nov 21 23:48:19 addons-386094 crio[773]: time="2025-11-21T23:48:19.738163067Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 21 23:48:19 addons-386094 crio[773]: time="2025-11-21T23:48:19.739195778Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 21 23:48:19 addons-386094 crio[773]: time="2025-11-21T23:48:19.740440119Z" level=info msg="Ran pod sandbox aa66f53b81b359c2832ec3157aa0b8ebb9180147369d74dd72be982526c35cbb with infra container: default/busybox/POD" id=893acc57-e824-42a0-8807-1b2c21c77a18 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 21 23:48:19 addons-386094 crio[773]: time="2025-11-21T23:48:19.741600311Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=63de04fe-3da2-4f7c-8873-264fbceaf34e name=/runtime.v1.ImageService/ImageStatus
	Nov 21 23:48:19 addons-386094 crio[773]: time="2025-11-21T23:48:19.741748186Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=63de04fe-3da2-4f7c-8873-264fbceaf34e name=/runtime.v1.ImageService/ImageStatus
	Nov 21 23:48:19 addons-386094 crio[773]: time="2025-11-21T23:48:19.741794359Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=63de04fe-3da2-4f7c-8873-264fbceaf34e name=/runtime.v1.ImageService/ImageStatus
	Nov 21 23:48:19 addons-386094 crio[773]: time="2025-11-21T23:48:19.742406591Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=3b4396dc-a091-4393-8915-a71772adb284 name=/runtime.v1.ImageService/PullImage
	Nov 21 23:48:19 addons-386094 crio[773]: time="2025-11-21T23:48:19.743697031Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 21 23:48:20 addons-386094 crio[773]: time="2025-11-21T23:48:20.350740521Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=3b4396dc-a091-4393-8915-a71772adb284 name=/runtime.v1.ImageService/PullImage
	Nov 21 23:48:20 addons-386094 crio[773]: time="2025-11-21T23:48:20.351194411Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=93505ac7-9cb9-4347-991c-6d8781ba8576 name=/runtime.v1.ImageService/ImageStatus
	Nov 21 23:48:20 addons-386094 crio[773]: time="2025-11-21T23:48:20.352328941Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=26d7b902-10ab-4720-8501-ec0f1340241c name=/runtime.v1.ImageService/ImageStatus
	Nov 21 23:48:20 addons-386094 crio[773]: time="2025-11-21T23:48:20.355572214Z" level=info msg="Creating container: default/busybox/busybox" id=6b456801-be71-42b2-8e0e-df925788de05 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 23:48:20 addons-386094 crio[773]: time="2025-11-21T23:48:20.355674437Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 23:48:20 addons-386094 crio[773]: time="2025-11-21T23:48:20.360463051Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 23:48:20 addons-386094 crio[773]: time="2025-11-21T23:48:20.360861768Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 23:48:20 addons-386094 crio[773]: time="2025-11-21T23:48:20.392691694Z" level=info msg="Created container ad7d4826038c19453f794d260208d8f1ae14386287c718450eccb7b2d112cf9b: default/busybox/busybox" id=6b456801-be71-42b2-8e0e-df925788de05 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 23:48:20 addons-386094 crio[773]: time="2025-11-21T23:48:20.393184307Z" level=info msg="Starting container: ad7d4826038c19453f794d260208d8f1ae14386287c718450eccb7b2d112cf9b" id=47a728b3-716f-4f72-ba10-1f71a69c94dc name=/runtime.v1.RuntimeService/StartContainer
	Nov 21 23:48:20 addons-386094 crio[773]: time="2025-11-21T23:48:20.394715896Z" level=info msg="Started container" PID=6298 containerID=ad7d4826038c19453f794d260208d8f1ae14386287c718450eccb7b2d112cf9b description=default/busybox/busybox id=47a728b3-716f-4f72-ba10-1f71a69c94dc name=/runtime.v1.RuntimeService/StartContainer sandboxID=aa66f53b81b359c2832ec3157aa0b8ebb9180147369d74dd72be982526c35cbb
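
The CRI-O excerpt above walks one pod through its whole lifecycle: sandbox creation, CNI attachment, image pull, container create, and container start. Each step can be cross-checked from the node with crictl; a hedged sketch, assuming the stock minikube image where crictl is preconfigured for the CRI-O socket:

    # The busybox sandbox and container created in the log lines above:
    out/minikube-linux-amd64 -p addons-386094 ssh -- sudo crictl pods --name busybox
    out/minikube-linux-amd64 -p addons-386094 ssh -- sudo crictl ps --name busybox

    # The image pulled at 23:48:20, listed by digest:
    out/minikube-linux-amd64 -p addons-386094 ssh -- sudo crictl images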
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	ad7d4826038c1       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          7 seconds ago        Running             busybox                                  0                   aa66f53b81b35       busybox                                    default
	f0902a7fbcf03       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 11 seconds ago       Running             gcp-auth                                 0                   33d9f6d328500       gcp-auth-78565c9fb4-rld7n                  gcp-auth
	e2676dd33c7c0       884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45                                                                             12 seconds ago       Exited              patch                                    2                   9315d1f95155c       gcp-auth-certs-patch-spdqg                 gcp-auth
	4c40091f92b42       registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27                             12 seconds ago       Running             controller                               0                   b9426c32aee24       ingress-nginx-controller-6c8bf45fb-bm7tc   ingress-nginx
	075e2ddbfd1b3       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          16 seconds ago       Running             csi-snapshotter                          0                   9c396f7a9b3f2       csi-hostpathplugin-bw962                   kube-system
	7dcc52ab64881       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          17 seconds ago       Running             csi-provisioner                          0                   9c396f7a9b3f2       csi-hostpathplugin-bw962                   kube-system
	28017a975316b       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            18 seconds ago       Running             liveness-probe                           0                   9c396f7a9b3f2       csi-hostpathplugin-bw962                   kube-system
	af7d23a1702df       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           19 seconds ago       Running             hostpath                                 0                   9c396f7a9b3f2       csi-hostpathplugin-bw962                   kube-system
	c18a6d90e25b7       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:9a12b3c1d155bb081ff408a9b6c1cec18573c967e0c3917225b81ffe11c0b7f2                            20 seconds ago       Running             gadget                                   0                   635a014a6c4e5       gadget-pjh9l                               gadget
	eeac962e2c5b5       884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45                                                                             20 seconds ago       Exited              patch                                    2                   32542f52a6f17       ingress-nginx-admission-patch-tztpx        ingress-nginx
	b0ebc6a2643ce       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                22 seconds ago       Running             node-driver-registrar                    0                   9c396f7a9b3f2       csi-hostpathplugin-bw962                   kube-system
	08636839ef014       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              23 seconds ago       Running             registry-proxy                           0                   6de26d98e1adc       registry-proxy-7jwr9                       kube-system
	d07326bff3cea       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   24 seconds ago       Exited              create                                   0                   a57ee390ec71c       gcp-auth-certs-create-l42s2                gcp-auth
	daba4d9b267e3       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   24 seconds ago       Exited              create                                   0                   2140b22f1628f       ingress-nginx-admission-create-8z425       ingress-nginx
	2cad693d643e2       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     24 seconds ago       Running             amd-gpu-device-plugin                    0                   1eb615ffd7084       amd-gpu-device-plugin-rjdxd                kube-system
	f1ffe717c9acc       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     25 seconds ago       Running             nvidia-device-plugin-ctr                 0                   623eb21922668       nvidia-device-plugin-daemonset-mqmzt       kube-system
	2d8c2a76b689b       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   28 seconds ago       Running             csi-external-health-monitor-controller   0                   9c396f7a9b3f2       csi-hostpathplugin-bw962                   kube-system
	fdcdf133e27bc       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      29 seconds ago       Running             volume-snapshot-controller               0                   d82e94d7a353d       snapshot-controller-7d9fbc56b8-wknk9       kube-system
	1d39e28b86df0       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              30 seconds ago       Running             yakd                                     0                   b7140e30c78e7       yakd-dashboard-5ff678cb9-qqz4m             yakd-dashboard
	dafa66d52ee1e       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             32 seconds ago       Running             csi-attacher                             0                   7ee01580ffe38       csi-hostpath-attacher-0                    kube-system
	4188e634536cb       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              33 seconds ago       Running             csi-resizer                              0                   942f98da9ed8b       csi-hostpath-resizer-0                     kube-system
	9b71739f67786       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      34 seconds ago       Running             volume-snapshot-controller               0                   fe6f9c5d7083c       snapshot-controller-7d9fbc56b8-mfq9f       kube-system
	d5d12cca9e0c9       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           36 seconds ago       Running             registry                                 0                   11e3e9ce8d611       registry-6b586f9694-sgqmn                  kube-system
	d0c5a0bacbbac       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               38 seconds ago       Running             minikube-ingress-dns                     0                   8a1f09fe5a9a8       kube-ingress-dns-minikube                  kube-system
	9ad0bc2610d4a       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        43 seconds ago       Running             metrics-server                           0                   91edf4cb58d2b       metrics-server-85b7d694d7-jj26h            kube-system
	3c127b7d8aab5       gcr.io/cloud-spanner-emulator/emulator@sha256:7360f5c5ff4b89d75592d9585fc2d59d207b08ccf262a84edfe79ee0613a7099                               44 seconds ago       Running             cloud-spanner-emulator                   0                   961289b1b8601       cloud-spanner-emulator-6f9fcf858b-wrw5n    default
	e41b861c11004       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             46 seconds ago       Running             local-path-provisioner                   0                   76e2acb10c471       local-path-provisioner-648f6765c9-l5l4d    local-path-storage
	a72f974d29159       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             47 seconds ago       Running             storage-provisioner                      0                   4cf33b190a059       storage-provisioner                        kube-system
	e22fb0f003be5       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             47 seconds ago       Running             coredns                                  0                   9175c2043e99b       coredns-66bc5c9577-jdqrr                   kube-system
	e2527b33e3e0a       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             About a minute ago   Running             kindnet-cni                              0                   6501523b079a1       kindnet-nhwtc                              kube-system
	8f7137d6b0740       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             About a minute ago   Running             kube-proxy                               0                   92f5a912e6393       kube-proxy-bqrb5                           kube-system
	343a757d13fed       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             About a minute ago   Running             kube-scheduler                           0                   ae4757b3cb885       kube-scheduler-addons-386094               kube-system
	07338917ef004       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             About a minute ago   Running             kube-controller-manager                  0                   a31bf4d5d7491       kube-controller-manager-addons-386094      kube-system
	b69d049422e96       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             About a minute ago   Running             kube-apiserver                           0                   f3bbc4a677472       kube-apiserver-addons-386094               kube-system
	d75ed4c5a71d1       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             About a minute ago   Running             etcd                                     0                   bdc86dbfd85b0       etcd-addons-386094                         kube-system
	
	
	==> coredns [e22fb0f003be5df198b862027391260b213589b89bd11cbaf227403d428038a9] <==
	[INFO] 10.244.0.12:50880 - 22904 "A IN registry.kube-system.svc.cluster.local.local. udp 62 false 512" NXDOMAIN qr,rd,ra 62 0.002691384s
	[INFO] 10.244.0.12:58873 - 30716 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.000080829s
	[INFO] 10.244.0.12:58873 - 30396 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.000118077s
	[INFO] 10.244.0.12:45955 - 14628 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000075472s
	[INFO] 10.244.0.12:45955 - 14878 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000112663s
	[INFO] 10.244.0.12:34049 - 65469 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000057727s
	[INFO] 10.244.0.12:34049 - 65045 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000096039s
	[INFO] 10.244.0.12:42437 - 3076 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000080594s
	[INFO] 10.244.0.12:42437 - 3470 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000132208s
	[INFO] 10.244.0.22:54817 - 28583 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000181994s
	[INFO] 10.244.0.22:45144 - 14278 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000274364s
	[INFO] 10.244.0.22:42106 - 34246 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000133063s
	[INFO] 10.244.0.22:60028 - 35045 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000189552s
	[INFO] 10.244.0.22:50855 - 60152 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000090056s
	[INFO] 10.244.0.22:33492 - 5266 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000120188s
	[INFO] 10.244.0.22:38948 - 21990 "AAAA IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.002717516s
	[INFO] 10.244.0.22:52540 - 15159 "A IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.005897475s
	[INFO] 10.244.0.22:57552 - 24993 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.004552869s
	[INFO] 10.244.0.22:33709 - 31346 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.004984531s
	[INFO] 10.244.0.22:59721 - 23945 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005594945s
	[INFO] 10.244.0.22:51178 - 21357 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.00583659s
	[INFO] 10.244.0.22:39969 - 24978 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005104809s
	[INFO] 10.244.0.22:60414 - 32222 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005827926s
	[INFO] 10.244.0.22:35380 - 34860 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.000881904s
	[INFO] 10.244.0.22:45589 - 15145 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001007106s
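
The wall of NXDOMAIN answers above is normal resolver behavior rather than a DNS fault: pod resolv.conf in Kubernetes carries a multi-entry search path (here the cluster suffixes plus the GCE host suffixes such as c.k8s-minikube.internal and google.internal) together with ndots:5, so a name like storage.googleapis.com is tried against every suffix before the final NOERROR answers on the bare name. To inspect the search list that generated these queries (assuming the busybox pod from this run is still around):

    kubectl exec busybox -- cat /etc/resolv.conf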
	
	
	==> describe nodes <==
	Name:               addons-386094
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-386094
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785
	                    minikube.k8s.io/name=addons-386094
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_21T23_46_53_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-386094
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-386094"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Nov 2025 23:46:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-386094
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Nov 2025 23:48:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Nov 2025 23:48:24 +0000   Fri, 21 Nov 2025 23:46:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Nov 2025 23:48:24 +0000   Fri, 21 Nov 2025 23:46:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Nov 2025 23:48:24 +0000   Fri, 21 Nov 2025 23:46:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Nov 2025 23:48:24 +0000   Fri, 21 Nov 2025 23:47:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-386094
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 5665009e93b91d39dc05718b691e3875
	  System UUID:                3e3b5b98-949e-4931-ada6-ea20a7cfd370
	  Boot ID:                    8e6acb0e-95c3-406d-a4d5-d86e16610b0f
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  default                     cloud-spanner-emulator-6f9fcf858b-wrw5n     0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  gadget                      gadget-pjh9l                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  gcp-auth                    gcp-auth-78565c9fb4-rld7n                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-bm7tc    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         89s
	  kube-system                 amd-gpu-device-plugin-rjdxd                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 coredns-66bc5c9577-jdqrr                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     90s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 csi-hostpathplugin-bw962                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 etcd-addons-386094                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         96s
	  kube-system                 kindnet-nhwtc                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      91s
	  kube-system                 kube-apiserver-addons-386094                250m (3%)     0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-controller-manager-addons-386094       200m (2%)     0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-proxy-bqrb5                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-scheduler-addons-386094                100m (1%)     0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 metrics-server-85b7d694d7-jj26h             100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         89s
	  kube-system                 nvidia-device-plugin-daemonset-mqmzt        0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 registry-6b586f9694-sgqmn                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 registry-creds-764b6fb674-hvw4s             0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 registry-proxy-7jwr9                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 snapshot-controller-7d9fbc56b8-mfq9f        0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 snapshot-controller-7d9fbc56b8-wknk9        0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  local-path-storage          local-path-provisioner-648f6765c9-l5l4d     0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-qqz4m              0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     89s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 88s   kube-proxy       
	  Normal  Starting                 96s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  96s   kubelet          Node addons-386094 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    96s   kubelet          Node addons-386094 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     96s   kubelet          Node addons-386094 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           91s   node-controller  Node addons-386094 event: Registered Node addons-386094 in Controller
	  Normal  NodeReady                49s   kubelet          Node addons-386094 status is now: NodeReady
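
The node dump reads as healthy: all pressure conditions are False, the node has been Ready since 23:47:39, and CPU requests sit at 1050m of 8 allocatable cores. The node name below is taken from the output above:

    # Presumably the command behind the section above:
    kubectl describe node addons-386094

    # Just the allocatable figures, without the full dump:
    kubectl get node addons-386094 -o jsonpath='{.status.allocatable}'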
	
	
	==> dmesg <==
	[Nov21 23:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001891] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084008] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.365618] i8042: Warning: Keylock active
	[  +0.011345] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.476881] block sda: the capability attribute has been deprecated.
	[  +0.082960] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023673] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.276450] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [d75ed4c5a71d1046bfb84c05ee8ef611de253000d3738fddfd1f3e871449f077] <==
	{"level":"warn","ts":"2025-11-21T23:46:49.866663Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:46:49.875187Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:46:49.881645Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:46:49.888232Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:46:49.894826Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:46:49.901192Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:46:49.907160Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:46:49.912969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:46:49.920591Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:46:49.927168Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:46:49.934017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:46:49.939907Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:46:49.945570Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:46:49.969792Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:46:49.975946Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:46:49.981492Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:46:50.028788Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:47:00.847589Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:47:27.480738Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:47:27.490890Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:47:27.497370Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:48:18.560329Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"146.335989ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-21T23:48:18.560429Z","caller":"traceutil/trace.go:172","msg":"trace[77590416] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1226; }","duration":"146.4537ms","start":"2025-11-21T23:48:18.413962Z","end":"2025-11-21T23:48:18.560416Z","steps":["trace[77590416] 'range keys from in-memory index tree'  (duration: 146.304225ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-21T23:48:18.560769Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"130.713842ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128041477335362828 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/gcp-auth/gcp-auth-certs-patch.187a2a77dded6000\" mod_revision:0 > success:<request_put:<key:\"/registry/events/gcp-auth/gcp-auth-certs-patch.187a2a77dded6000\" value_size:570 lease:8128041477335362116 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-11-21T23:48:18.560874Z","caller":"traceutil/trace.go:172","msg":"trace[126546247] transaction","detail":"{read_only:false; response_revision:1227; number_of_response:1; }","duration":"177.886979ms","start":"2025-11-21T23:48:18.382966Z","end":"2025-11-21T23:48:18.560853Z","steps":["trace[126546247] 'process raft request'  (duration: 46.708129ms)","trace[126546247] 'compare'  (duration: 130.624811ms)"],"step_count":2}
	
	
	==> gcp-auth [f0902a7fbcf03ea8319a6579ea7fd6ded469a0b438c1636668a12bfa50e182bc] <==
	2025/11/21 23:48:16 GCP Auth Webhook started!
	2025/11/21 23:48:19 Ready to marshal response ...
	2025/11/21 23:48:19 Ready to write response ...
	2025/11/21 23:48:19 Ready to marshal response ...
	2025/11/21 23:48:19 Ready to write response ...
	2025/11/21 23:48:19 Ready to marshal response ...
	2025/11/21 23:48:19 Ready to write response ...
	
	
	==> kernel <==
	 23:48:28 up 30 min,  0 user,  load average: 1.08, 0.59, 0.24
	Linux addons-386094 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e2527b33e3e0a37254f4ce209f68208b0d6bb19a8e9d5f4c04577a95745c00e5] <==
	I1121 23:46:59.499315       1 main.go:148] setting mtu 1500 for CNI 
	I1121 23:46:59.499330       1 main.go:178] kindnetd IP family: "ipv4"
	I1121 23:46:59.499350       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-21T23:46:59Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1121 23:46:59.864637       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1121 23:46:59.868094       1 controller.go:381] "Waiting for informer caches to sync"
	I1121 23:46:59.868122       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1121 23:46:59.871871       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1121 23:47:29.792200       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1121 23:47:29.792260       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1121 23:47:29.793146       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1121 23:47:29.865517       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1121 23:47:31.268722       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1121 23:47:31.268743       1 metrics.go:72] Registering metrics
	I1121 23:47:31.268796       1 controller.go:711] "Syncing nftables rules"
	I1121 23:47:39.796826       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 23:47:39.796879       1 main.go:301] handling current node
	I1121 23:47:49.792296       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 23:47:49.792329       1 main.go:301] handling current node
	I1121 23:47:59.791784       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 23:47:59.791819       1 main.go:301] handling current node
	I1121 23:48:09.791663       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 23:48:09.791694       1 main.go:301] handling current node
	I1121 23:48:19.791558       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 23:48:19.791597       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b69d049422e96952e03c6f61a20b8e9158e45490b685ec179d328f0af957256c] <==
	W1121 23:47:46.983864       1 handler_proxy.go:99] no RequestInfo found in the context
	E1121 23:47:46.983939       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1121 23:47:46.984163       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.72.114:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.72.114:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.72.114:443: connect: connection refused" logger="UnhandledError"
	E1121 23:47:46.986528       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.72.114:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.72.114:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.72.114:443: connect: connection refused" logger="UnhandledError"
	W1121 23:47:47.984313       1 handler_proxy.go:99] no RequestInfo found in the context
	E1121 23:47:47.984387       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1121 23:47:47.984400       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1121 23:47:47.984325       1 handler_proxy.go:99] no RequestInfo found in the context
	E1121 23:47:47.984446       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1121 23:47:47.985602       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1121 23:47:50.486574       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1121 23:47:51.995711       1 handler_proxy.go:99] no RequestInfo found in the context
	E1121 23:47:51.995769       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1121 23:47:51.995796       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.72.114:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.72.114:443/apis/metrics.k8s.io/v1beta1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	E1121 23:48:26.557959       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:54412: use of closed network connection
	E1121 23:48:26.693629       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:54448: use of closed network connection
	
	
	==> kube-controller-manager [07338917ef0048828dfbbd1553f9a031c7bd1f4699ffe7fcee081714d70aa687] <==
	I1121 23:46:57.453656       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1121 23:46:57.453676       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1121 23:46:57.453658       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1121 23:46:57.454466       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1121 23:46:57.455999       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1121 23:46:57.456092       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1121 23:46:57.457246       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1121 23:46:57.462086       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1121 23:46:57.464922       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 23:46:57.474645       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1121 23:46:57.475743       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1121 23:46:57.475789       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1121 23:46:57.475819       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1121 23:46:57.475830       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1121 23:46:57.475836       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1121 23:46:57.481078       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-386094" podCIDRs=["10.244.0.0/24"]
	E1121 23:46:59.596960       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1121 23:47:27.468560       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1121 23:47:27.468698       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1121 23:47:27.468728       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1121 23:47:27.480895       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1121 23:47:27.484156       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1121 23:47:27.569001       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 23:47:27.585279       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1121 23:47:42.461231       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [8f7137d6b0740cb066ea7ebf0361dc186434b4e47bdb1c5dc607dabb5bfc4a73] <==
	I1121 23:46:59.545431       1 server_linux.go:53] "Using iptables proxy"
	I1121 23:46:59.696530       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1121 23:46:59.796677       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1121 23:46:59.796708       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1121 23:46:59.796790       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1121 23:46:59.822032       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1121 23:46:59.822172       1 server_linux.go:132] "Using iptables Proxier"
	I1121 23:46:59.831607       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1121 23:46:59.838565       1 server.go:527] "Version info" version="v1.34.1"
	I1121 23:46:59.838600       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 23:46:59.840213       1 config.go:403] "Starting serviceCIDR config controller"
	I1121 23:46:59.840237       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1121 23:46:59.840297       1 config.go:309] "Starting node config controller"
	I1121 23:46:59.840315       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1121 23:46:59.840325       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1121 23:46:59.840341       1 config.go:200] "Starting service config controller"
	I1121 23:46:59.840355       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1121 23:46:59.840372       1 config.go:106] "Starting endpoint slice config controller"
	I1121 23:46:59.840384       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1121 23:46:59.940519       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1121 23:46:59.940795       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1121 23:46:59.940813       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [343a757d13fed0f3af37f66cd8776c9b1592f0157fd199fc07c446759a8c79dc] <==
	E1121 23:46:50.441587       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1121 23:46:50.441584       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1121 23:46:50.441706       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1121 23:46:50.441752       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1121 23:46:50.441852       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1121 23:46:50.441948       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1121 23:46:50.442011       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1121 23:46:50.442033       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1121 23:46:50.442258       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1121 23:46:50.442510       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1121 23:46:50.442555       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1121 23:46:50.442594       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1121 23:46:50.442968       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1121 23:46:50.442999       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1121 23:46:50.443633       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1121 23:46:50.443674       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1121 23:46:50.444161       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1121 23:46:51.288023       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1121 23:46:51.388810       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1121 23:46:51.417124       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1121 23:46:51.478639       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1121 23:46:51.569947       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1121 23:46:51.587727       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1121 23:46:51.602651       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	I1121 23:46:53.038590       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 21 23:48:06 addons-386094 kubelet[1275]: I1121 23:48:06.054408    1275 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-7jwr9" secret="" err="secret \"gcp-auth\" not found"
	Nov 21 23:48:06 addons-386094 kubelet[1275]: I1121 23:48:06.829084    1275 scope.go:117] "RemoveContainer" containerID="553277e14b8ffdb6d128116982f159720ebf7be4985cfc3ec725e199ed5007d7"
	Nov 21 23:48:08 addons-386094 kubelet[1275]: I1121 23:48:08.063366    1275 scope.go:117] "RemoveContainer" containerID="553277e14b8ffdb6d128116982f159720ebf7be4985cfc3ec725e199ed5007d7"
	Nov 21 23:48:08 addons-386094 kubelet[1275]: I1121 23:48:08.085494    1275 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-pjh9l" podStartSLOduration=65.199699665 podStartE2EDuration="1m9.085476073s" podCreationTimestamp="2025-11-21 23:46:59 +0000 UTC" firstStartedPulling="2025-11-21 23:48:04.057169173 +0000 UTC m=+71.303735411" lastFinishedPulling="2025-11-21 23:48:07.942945563 +0000 UTC m=+75.189511819" observedRunningTime="2025-11-21 23:48:08.084700917 +0000 UTC m=+75.331267186" watchObservedRunningTime="2025-11-21 23:48:08.085476073 +0000 UTC m=+75.332042332"
	Nov 21 23:48:09 addons-386094 kubelet[1275]: I1121 23:48:09.262258    1275 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n78jj\" (UniqueName: \"kubernetes.io/projected/77799239-41e1-4a40-aca3-46edf05c9931-kube-api-access-n78jj\") pod \"77799239-41e1-4a40-aca3-46edf05c9931\" (UID: \"77799239-41e1-4a40-aca3-46edf05c9931\") "
	Nov 21 23:48:09 addons-386094 kubelet[1275]: I1121 23:48:09.265173    1275 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77799239-41e1-4a40-aca3-46edf05c9931-kube-api-access-n78jj" (OuterVolumeSpecName: "kube-api-access-n78jj") pod "77799239-41e1-4a40-aca3-46edf05c9931" (UID: "77799239-41e1-4a40-aca3-46edf05c9931"). InnerVolumeSpecName "kube-api-access-n78jj". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 21 23:48:09 addons-386094 kubelet[1275]: I1121 23:48:09.363078    1275 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-n78jj\" (UniqueName: \"kubernetes.io/projected/77799239-41e1-4a40-aca3-46edf05c9931-kube-api-access-n78jj\") on node \"addons-386094\" DevicePath \"\""
	Nov 21 23:48:09 addons-386094 kubelet[1275]: I1121 23:48:09.880765    1275 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Nov 21 23:48:09 addons-386094 kubelet[1275]: I1121 23:48:09.880806    1275 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Nov 21 23:48:10 addons-386094 kubelet[1275]: I1121 23:48:10.082503    1275 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32542f52a6f171569fcafea382ed4f36e5183de1ccf21f800854187ec1692eef"
	Nov 21 23:48:11 addons-386094 kubelet[1275]: E1121 23:48:11.884523    1275 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Nov 21 23:48:11 addons-386094 kubelet[1275]: E1121 23:48:11.884603    1275 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d8248262-bb58-4830-86c5-7e3da3404d7a-gcr-creds podName:d8248262-bb58-4830-86c5-7e3da3404d7a nodeName:}" failed. No retries permitted until 2025-11-21 23:48:43.884586332 +0000 UTC m=+111.131152581 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/d8248262-bb58-4830-86c5-7e3da3404d7a-gcr-creds") pod "registry-creds-764b6fb674-hvw4s" (UID: "d8248262-bb58-4830-86c5-7e3da3404d7a") : secret "registry-creds-gcr" not found
	Nov 21 23:48:12 addons-386094 kubelet[1275]: I1121 23:48:12.109687    1275 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-bw962" podStartSLOduration=1.348486676 podStartE2EDuration="32.109668742s" podCreationTimestamp="2025-11-21 23:47:40 +0000 UTC" firstStartedPulling="2025-11-21 23:47:40.45354594 +0000 UTC m=+47.700112182" lastFinishedPulling="2025-11-21 23:48:11.21472801 +0000 UTC m=+78.461294248" observedRunningTime="2025-11-21 23:48:12.108711234 +0000 UTC m=+79.355277516" watchObservedRunningTime="2025-11-21 23:48:12.109668742 +0000 UTC m=+79.356235001"
	Nov 21 23:48:15 addons-386094 kubelet[1275]: I1121 23:48:15.829136    1275 scope.go:117] "RemoveContainer" containerID="16e08a0fdb5a1e6e2b0460faa95b74c2908bec7bd11c017e38ed68622c72aca0"
	Nov 21 23:48:16 addons-386094 kubelet[1275]: I1121 23:48:16.117248    1275 scope.go:117] "RemoveContainer" containerID="16e08a0fdb5a1e6e2b0460faa95b74c2908bec7bd11c017e38ed68622c72aca0"
	Nov 21 23:48:17 addons-386094 kubelet[1275]: I1121 23:48:17.136625    1275 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-6c8bf45fb-bm7tc" podStartSLOduration=74.557710811 podStartE2EDuration="1m18.1366065s" podCreationTimestamp="2025-11-21 23:46:59 +0000 UTC" firstStartedPulling="2025-11-21 23:48:12.169834814 +0000 UTC m=+79.416401053" lastFinishedPulling="2025-11-21 23:48:15.748730501 +0000 UTC m=+82.995296742" observedRunningTime="2025-11-21 23:48:16.147305177 +0000 UTC m=+83.393871436" watchObservedRunningTime="2025-11-21 23:48:17.1366065 +0000 UTC m=+84.383172759"
	Nov 21 23:48:17 addons-386094 kubelet[1275]: I1121 23:48:17.137180    1275 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-rld7n" podStartSLOduration=66.421078422 podStartE2EDuration="1m11.13716844s" podCreationTimestamp="2025-11-21 23:47:06 +0000 UTC" firstStartedPulling="2025-11-21 23:48:12.198930058 +0000 UTC m=+79.445496297" lastFinishedPulling="2025-11-21 23:48:16.915020077 +0000 UTC m=+84.161586315" observedRunningTime="2025-11-21 23:48:17.136214253 +0000 UTC m=+84.382780512" watchObservedRunningTime="2025-11-21 23:48:17.13716844 +0000 UTC m=+84.383734699"
	Nov 21 23:48:17 addons-386094 kubelet[1275]: I1121 23:48:17.225948    1275 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8hm5\" (UniqueName: \"kubernetes.io/projected/0876eb89-2101-4803-9594-4283b59ef432-kube-api-access-t8hm5\") pod \"0876eb89-2101-4803-9594-4283b59ef432\" (UID: \"0876eb89-2101-4803-9594-4283b59ef432\") "
	Nov 21 23:48:17 addons-386094 kubelet[1275]: I1121 23:48:17.227929    1275 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0876eb89-2101-4803-9594-4283b59ef432-kube-api-access-t8hm5" (OuterVolumeSpecName: "kube-api-access-t8hm5") pod "0876eb89-2101-4803-9594-4283b59ef432" (UID: "0876eb89-2101-4803-9594-4283b59ef432"). InnerVolumeSpecName "kube-api-access-t8hm5". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 21 23:48:17 addons-386094 kubelet[1275]: I1121 23:48:17.326845    1275 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-t8hm5\" (UniqueName: \"kubernetes.io/projected/0876eb89-2101-4803-9594-4283b59ef432-kube-api-access-t8hm5\") on node \"addons-386094\" DevicePath \"\""
	Nov 21 23:48:18 addons-386094 kubelet[1275]: I1121 23:48:18.132185    1275 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9315d1f95155cf08e5d7efcd6e99acfebeb7a424b316b18372dfbd7551de84c6"
	Nov 21 23:48:19 addons-386094 kubelet[1275]: I1121 23:48:19.443967    1275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/36e8d411-9a35-4c79-b1f5-8e3e5c5fc9c1-gcp-creds\") pod \"busybox\" (UID: \"36e8d411-9a35-4c79-b1f5-8e3e5c5fc9c1\") " pod="default/busybox"
	Nov 21 23:48:19 addons-386094 kubelet[1275]: I1121 23:48:19.444024    1275 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hps2j\" (UniqueName: \"kubernetes.io/projected/36e8d411-9a35-4c79-b1f5-8e3e5c5fc9c1-kube-api-access-hps2j\") pod \"busybox\" (UID: \"36e8d411-9a35-4c79-b1f5-8e3e5c5fc9c1\") " pod="default/busybox"
	Nov 21 23:48:21 addons-386094 kubelet[1275]: I1121 23:48:21.153065    1275 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.5432949950000001 podStartE2EDuration="2.153033269s" podCreationTimestamp="2025-11-21 23:48:19 +0000 UTC" firstStartedPulling="2025-11-21 23:48:19.742083601 +0000 UTC m=+86.988649843" lastFinishedPulling="2025-11-21 23:48:20.351821877 +0000 UTC m=+87.598388117" observedRunningTime="2025-11-21 23:48:21.152379516 +0000 UTC m=+88.398945775" watchObservedRunningTime="2025-11-21 23:48:21.153033269 +0000 UTC m=+88.399599524"
	Nov 21 23:48:26 addons-386094 kubelet[1275]: E1121 23:48:26.557893    1275 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:58274->127.0.0.1:33265: write tcp 127.0.0.1:58274->127.0.0.1:33265: write: broken pipe
	
	
	==> storage-provisioner [a72f974d29159737be96b385ecdb775e4812608008c4cffaa78a874f21d0dbfd] <==
	W1121 23:48:02.596888       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:48:04.600442       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:48:04.604428       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:48:06.607849       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:48:06.612569       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:48:08.615984       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:48:08.619803       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:48:10.622818       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:48:10.626068       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:48:12.631017       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:48:12.639290       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:48:14.642495       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:48:14.675359       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:48:16.679000       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:48:16.683962       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:48:18.686743       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:48:18.689704       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:48:20.692223       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:48:20.696525       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:48:22.698647       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:48:22.701724       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:48:24.703878       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:48:24.706863       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:48:26.709271       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:48:26.712539       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-386094 -n addons-386094
helpers_test.go:269: (dbg) Run:  kubectl --context addons-386094 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: gcp-auth-certs-create-l42s2 gcp-auth-certs-patch-spdqg ingress-nginx-admission-create-8z425 ingress-nginx-admission-patch-tztpx registry-creds-764b6fb674-hvw4s
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-386094 describe pod gcp-auth-certs-create-l42s2 gcp-auth-certs-patch-spdqg ingress-nginx-admission-create-8z425 ingress-nginx-admission-patch-tztpx registry-creds-764b6fb674-hvw4s
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-386094 describe pod gcp-auth-certs-create-l42s2 gcp-auth-certs-patch-spdqg ingress-nginx-admission-create-8z425 ingress-nginx-admission-patch-tztpx registry-creds-764b6fb674-hvw4s: exit status 1 (73.530818ms)

** stderr ** 
	Error from server (NotFound): pods "gcp-auth-certs-create-l42s2" not found
	Error from server (NotFound): pods "gcp-auth-certs-patch-spdqg" not found
	Error from server (NotFound): pods "ingress-nginx-admission-create-8z425" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-tztpx" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-hvw4s" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-386094 describe pod gcp-auth-certs-create-l42s2 gcp-auth-certs-patch-spdqg ingress-nginx-admission-create-8z425 ingress-nginx-admission-patch-tztpx registry-creds-764b6fb674-hvw4s: exit status 1
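The NotFound errors above are expected here: the post-mortem list at helpers_test.go:280 ran namespace-wide (-A), but the follow-up describe ran without -n and therefore looked for every pod in the default namespace, while these pods live in gcp-auth, ingress-nginx, and kube-system (the completed certificate-job pods may also already be gone). A hedged, namespace-aware variant of the same post-mortem (pod names are specific to this run):
	kubectl --context addons-386094 -n gcp-auth describe pod gcp-auth-certs-create-l42s2 gcp-auth-certs-patch-spdqg
	kubectl --context addons-386094 -n ingress-nginx describe pod ingress-nginx-admission-create-8z425 ingress-nginx-admission-patch-tztpx
	kubectl --context addons-386094 -n kube-system describe pod registry-creds-764b6fb674-hvw4s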
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-386094 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-386094 addons disable headlamp --alsologtostderr -v=1: exit status 11 (234.476212ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1121 23:48:29.124907   24949 out.go:360] Setting OutFile to fd 1 ...
	I1121 23:48:29.125042   24949 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:48:29.125050   24949 out.go:374] Setting ErrFile to fd 2...
	I1121 23:48:29.125075   24949 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:48:29.125282   24949 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9122/.minikube/bin
	I1121 23:48:29.125535   24949 mustload.go:66] Loading cluster: addons-386094
	I1121 23:48:29.125857   24949 config.go:182] Loaded profile config "addons-386094": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:48:29.125871   24949 addons.go:622] checking whether the cluster is paused
	I1121 23:48:29.125947   24949 config.go:182] Loaded profile config "addons-386094": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:48:29.125958   24949 host.go:66] Checking if "addons-386094" exists ...
	I1121 23:48:29.126368   24949 cli_runner.go:164] Run: docker container inspect addons-386094 --format={{.State.Status}}
	I1121 23:48:29.146439   24949 ssh_runner.go:195] Run: systemctl --version
	I1121 23:48:29.146491   24949 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386094
	I1121 23:48:29.164273   24949 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/addons-386094/id_rsa Username:docker}
	I1121 23:48:29.252029   24949 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 23:48:29.252163   24949 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 23:48:29.278646   24949 cri.go:89] found id: "075e2ddbfd1b3053a90e5152f5a3c88d7b0f512a81d837cc0eec9e714320db50"
	I1121 23:48:29.278663   24949 cri.go:89] found id: "7dcc52ab64881edaa7603d95b2807743f4ad209f6d38b4cbf9a19086eca32117"
	I1121 23:48:29.278668   24949 cri.go:89] found id: "28017a975316b1551b2d1bc8dc9eca9342d94f3a04393f626159565843f3a4b9"
	I1121 23:48:29.278673   24949 cri.go:89] found id: "af7d23a1702df3cd4a84775a1c1997ae723153a2dae54920ea3e66a13f11402d"
	I1121 23:48:29.278678   24949 cri.go:89] found id: "b0ebc6a2643ce15a93031b7642059742231c8aefb11a96fa36edac0fdf3ed04c"
	I1121 23:48:29.278683   24949 cri.go:89] found id: "08636839ef014bca0593992d4306a8c75b8669bb16e7ab303f060c2b45b61813"
	I1121 23:48:29.278688   24949 cri.go:89] found id: "2cad693d643e270be01d0b986dd241986a30a6e0bf4e8ae3e5709026b2d7ec13"
	I1121 23:48:29.278693   24949 cri.go:89] found id: "f1ffe717c9acc1343be4709c4cc2b9a8d4103a7f7086b39e554752db9dc132c9"
	I1121 23:48:29.278697   24949 cri.go:89] found id: "2d8c2a76b689b579408c5432035569d448c0ccc7ac8e4a34b4e373f1fb8de96c"
	I1121 23:48:29.278723   24949 cri.go:89] found id: "fdcdf133e27bc1deb19395508496fc46e7183d7c82784e36026a37e4616b634c"
	I1121 23:48:29.278735   24949 cri.go:89] found id: "dafa66d52ee1e679c9d8823cf6b0536f31d7e9635c9b865d81c2b0b7824f7f09"
	I1121 23:48:29.278739   24949 cri.go:89] found id: "4188e634536cb2d7ee1210f1ef11fdfc593c5c568f3907a4a9835af3ba695afd"
	I1121 23:48:29.278744   24949 cri.go:89] found id: "9b71739f67786155a593d80c0e276ce41f1a6abb990cea23f25a6c9d884534fb"
	I1121 23:48:29.278748   24949 cri.go:89] found id: "d5d12cca9e0c9e6587e9f82cf3276be135a3248d2f7fd31949fc16b8b7ec6716"
	I1121 23:48:29.278753   24949 cri.go:89] found id: "d0c5a0bacbbac792d7e58a5e5c7bd0fee9db7cec3a448ebe89ea66be1fd493cc"
	I1121 23:48:29.278759   24949 cri.go:89] found id: "9ad0bc2610d4aed59a9924e959ff00b3282c0f84a8053b2c8a24bce05841da6b"
	I1121 23:48:29.278762   24949 cri.go:89] found id: "a72f974d29159737be96b385ecdb775e4812608008c4cffaa78a874f21d0dbfd"
	I1121 23:48:29.278767   24949 cri.go:89] found id: "e22fb0f003be5df198b862027391260b213589b89bd11cbaf227403d428038a9"
	I1121 23:48:29.278769   24949 cri.go:89] found id: "e2527b33e3e0a37254f4ce209f68208b0d6bb19a8e9d5f4c04577a95745c00e5"
	I1121 23:48:29.278772   24949 cri.go:89] found id: "8f7137d6b0740cb066ea7ebf0361dc186434b4e47bdb1c5dc607dabb5bfc4a73"
	I1121 23:48:29.278777   24949 cri.go:89] found id: "343a757d13fed0f3af37f66cd8776c9b1592f0157fd199fc07c446759a8c79dc"
	I1121 23:48:29.278780   24949 cri.go:89] found id: "07338917ef0048828dfbbd1553f9a031c7bd1f4699ffe7fcee081714d70aa687"
	I1121 23:48:29.278783   24949 cri.go:89] found id: "b69d049422e96952e03c6f61a20b8e9158e45490b685ec179d328f0af957256c"
	I1121 23:48:29.278785   24949 cri.go:89] found id: "d75ed4c5a71d1046bfb84c05ee8ef611de253000d3738fddfd1f3e871449f077"
	I1121 23:48:29.278788   24949 cri.go:89] found id: ""
	I1121 23:48:29.278825   24949 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 23:48:29.291567   24949 out.go:203] 
	W1121 23:48:29.292580   24949 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T23:48:29Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T23:48:29Z" level=error msg="open /run/runc: no such file or directory"
	
	W1121 23:48:29.292600   24949 out.go:285] * 
	* 
	W1121 23:48:29.295456   24949 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1121 23:48:29.296539   24949 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-amd64 -p addons-386094 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (2.37s)
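The exit status 11 is not specific to Headlamp: before disabling an addon, minikube checks whether the cluster is paused (addons.go:622 in the trace above) by listing kube-system containers through CRI-O and then running sudo runc list -f json on the node, and /run/runc does not exist on this CRI-O node, so the check itself aborts with MK_ADDON_DISABLE_PAUSED before any addon change is attempted. A minimal sketch to replay both halves of that check by hand, reusing only commands already shown in the log:
	# lists kube-system container IDs via CRI-O, matching the cri.go:89 lines above
	minikube -p addons-386094 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# expected to fail the same way as the test: "open /run/runc: no such file or directory"
	minikube -p addons-386094 ssh -- sudo runc list -f json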

TestAddons/parallel/CloudSpanner (5.25s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-6f9fcf858b-wrw5n" [2282fcf1-29be-4cd4-8d2b-5d5c22fb1c19] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.002551646s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-386094 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-386094 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (239.112727ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1121 23:48:38.228559   25449 out.go:360] Setting OutFile to fd 1 ...
	I1121 23:48:38.228840   25449 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:48:38.228851   25449 out.go:374] Setting ErrFile to fd 2...
	I1121 23:48:38.228855   25449 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:48:38.229011   25449 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9122/.minikube/bin
	I1121 23:48:38.229282   25449 mustload.go:66] Loading cluster: addons-386094
	I1121 23:48:38.229614   25449 config.go:182] Loaded profile config "addons-386094": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:48:38.229628   25449 addons.go:622] checking whether the cluster is paused
	I1121 23:48:38.229705   25449 config.go:182] Loaded profile config "addons-386094": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:48:38.229717   25449 host.go:66] Checking if "addons-386094" exists ...
	I1121 23:48:38.230070   25449 cli_runner.go:164] Run: docker container inspect addons-386094 --format={{.State.Status}}
	I1121 23:48:38.250444   25449 ssh_runner.go:195] Run: systemctl --version
	I1121 23:48:38.250511   25449 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386094
	I1121 23:48:38.268161   25449 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/addons-386094/id_rsa Username:docker}
	I1121 23:48:38.358447   25449 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 23:48:38.358531   25449 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 23:48:38.386644   25449 cri.go:89] found id: "075e2ddbfd1b3053a90e5152f5a3c88d7b0f512a81d837cc0eec9e714320db50"
	I1121 23:48:38.386675   25449 cri.go:89] found id: "7dcc52ab64881edaa7603d95b2807743f4ad209f6d38b4cbf9a19086eca32117"
	I1121 23:48:38.386679   25449 cri.go:89] found id: "28017a975316b1551b2d1bc8dc9eca9342d94f3a04393f626159565843f3a4b9"
	I1121 23:48:38.386682   25449 cri.go:89] found id: "af7d23a1702df3cd4a84775a1c1997ae723153a2dae54920ea3e66a13f11402d"
	I1121 23:48:38.386686   25449 cri.go:89] found id: "b0ebc6a2643ce15a93031b7642059742231c8aefb11a96fa36edac0fdf3ed04c"
	I1121 23:48:38.386690   25449 cri.go:89] found id: "08636839ef014bca0593992d4306a8c75b8669bb16e7ab303f060c2b45b61813"
	I1121 23:48:38.386693   25449 cri.go:89] found id: "2cad693d643e270be01d0b986dd241986a30a6e0bf4e8ae3e5709026b2d7ec13"
	I1121 23:48:38.386695   25449 cri.go:89] found id: "f1ffe717c9acc1343be4709c4cc2b9a8d4103a7f7086b39e554752db9dc132c9"
	I1121 23:48:38.386698   25449 cri.go:89] found id: "2d8c2a76b689b579408c5432035569d448c0ccc7ac8e4a34b4e373f1fb8de96c"
	I1121 23:48:38.386708   25449 cri.go:89] found id: "fdcdf133e27bc1deb19395508496fc46e7183d7c82784e36026a37e4616b634c"
	I1121 23:48:38.386711   25449 cri.go:89] found id: "dafa66d52ee1e679c9d8823cf6b0536f31d7e9635c9b865d81c2b0b7824f7f09"
	I1121 23:48:38.386714   25449 cri.go:89] found id: "4188e634536cb2d7ee1210f1ef11fdfc593c5c568f3907a4a9835af3ba695afd"
	I1121 23:48:38.386716   25449 cri.go:89] found id: "9b71739f67786155a593d80c0e276ce41f1a6abb990cea23f25a6c9d884534fb"
	I1121 23:48:38.386719   25449 cri.go:89] found id: "d5d12cca9e0c9e6587e9f82cf3276be135a3248d2f7fd31949fc16b8b7ec6716"
	I1121 23:48:38.386722   25449 cri.go:89] found id: "d0c5a0bacbbac792d7e58a5e5c7bd0fee9db7cec3a448ebe89ea66be1fd493cc"
	I1121 23:48:38.386733   25449 cri.go:89] found id: "9ad0bc2610d4aed59a9924e959ff00b3282c0f84a8053b2c8a24bce05841da6b"
	I1121 23:48:38.386740   25449 cri.go:89] found id: "a72f974d29159737be96b385ecdb775e4812608008c4cffaa78a874f21d0dbfd"
	I1121 23:48:38.386745   25449 cri.go:89] found id: "e22fb0f003be5df198b862027391260b213589b89bd11cbaf227403d428038a9"
	I1121 23:48:38.386747   25449 cri.go:89] found id: "e2527b33e3e0a37254f4ce209f68208b0d6bb19a8e9d5f4c04577a95745c00e5"
	I1121 23:48:38.386750   25449 cri.go:89] found id: "8f7137d6b0740cb066ea7ebf0361dc186434b4e47bdb1c5dc607dabb5bfc4a73"
	I1121 23:48:38.386753   25449 cri.go:89] found id: "343a757d13fed0f3af37f66cd8776c9b1592f0157fd199fc07c446759a8c79dc"
	I1121 23:48:38.386755   25449 cri.go:89] found id: "07338917ef0048828dfbbd1553f9a031c7bd1f4699ffe7fcee081714d70aa687"
	I1121 23:48:38.386758   25449 cri.go:89] found id: "b69d049422e96952e03c6f61a20b8e9158e45490b685ec179d328f0af957256c"
	I1121 23:48:38.386760   25449 cri.go:89] found id: "d75ed4c5a71d1046bfb84c05ee8ef611de253000d3738fddfd1f3e871449f077"
	I1121 23:48:38.386763   25449 cri.go:89] found id: ""
	I1121 23:48:38.386808   25449 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 23:48:38.399538   25449 out.go:203] 
	W1121 23:48:38.400537   25449 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T23:48:38Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T23:48:38Z" level=error msg="open /run/runc: no such file or directory"
	
	W1121 23:48:38.400560   25449 out.go:285] * 
	* 
	W1121 23:48:38.403514   25449 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1121 23:48:38.404617   25449 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-amd64 -p addons-386094 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.25s)
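The cloud-spanner-emulator pod itself is healthy; the failure is the same paused-check abort seen in Headlamp above. A hedged way to confirm which OCI runtime this CRI-O node is actually configured with (paths assume minikube's stock CRI-O layout and are not verified here):
	# if CRI-O's default runtime is not runc, a missing /run/runc state directory is expected
	minikube -p addons-386094 ssh -- sudo grep -rn default_runtime /etc/crio
	minikube -p addons-386094 ssh -- ls /run/runc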

TestAddons/parallel/LocalPath (9.17s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-386094 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-386094 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-386094 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-386094 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-386094 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-386094 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-386094 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [742a84a7-e6f8-4fdf-a1a6-8b19d5c7af61] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [742a84a7-e6f8-4fdf-a1a6-8b19d5c7af61] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [742a84a7-e6f8-4fdf-a1a6-8b19d5c7af61] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.002134717s
addons_test.go:967: (dbg) Run:  kubectl --context addons-386094 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-386094 ssh "cat /opt/local-path-provisioner/pvc-60032366-7407-48ad-af71-327345f784b4_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-386094 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-386094 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-386094 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-386094 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (238.840772ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1121 23:48:44.517404   26889 out.go:360] Setting OutFile to fd 1 ...
	I1121 23:48:44.517551   26889 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:48:44.517563   26889 out.go:374] Setting ErrFile to fd 2...
	I1121 23:48:44.517568   26889 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:48:44.517789   26889 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9122/.minikube/bin
	I1121 23:48:44.518045   26889 mustload.go:66] Loading cluster: addons-386094
	I1121 23:48:44.518368   26889 config.go:182] Loaded profile config "addons-386094": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:48:44.518387   26889 addons.go:622] checking whether the cluster is paused
	I1121 23:48:44.518487   26889 config.go:182] Loaded profile config "addons-386094": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:48:44.518502   26889 host.go:66] Checking if "addons-386094" exists ...
	I1121 23:48:44.518892   26889 cli_runner.go:164] Run: docker container inspect addons-386094 --format={{.State.Status}}
	I1121 23:48:44.537910   26889 ssh_runner.go:195] Run: systemctl --version
	I1121 23:48:44.537963   26889 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386094
	I1121 23:48:44.556990   26889 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/addons-386094/id_rsa Username:docker}
	I1121 23:48:44.647287   26889 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 23:48:44.647342   26889 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 23:48:44.676946   26889 cri.go:89] found id: "075e2ddbfd1b3053a90e5152f5a3c88d7b0f512a81d837cc0eec9e714320db50"
	I1121 23:48:44.676964   26889 cri.go:89] found id: "7dcc52ab64881edaa7603d95b2807743f4ad209f6d38b4cbf9a19086eca32117"
	I1121 23:48:44.676970   26889 cri.go:89] found id: "28017a975316b1551b2d1bc8dc9eca9342d94f3a04393f626159565843f3a4b9"
	I1121 23:48:44.676974   26889 cri.go:89] found id: "af7d23a1702df3cd4a84775a1c1997ae723153a2dae54920ea3e66a13f11402d"
	I1121 23:48:44.676979   26889 cri.go:89] found id: "b0ebc6a2643ce15a93031b7642059742231c8aefb11a96fa36edac0fdf3ed04c"
	I1121 23:48:44.676983   26889 cri.go:89] found id: "08636839ef014bca0593992d4306a8c75b8669bb16e7ab303f060c2b45b61813"
	I1121 23:48:44.676987   26889 cri.go:89] found id: "2cad693d643e270be01d0b986dd241986a30a6e0bf4e8ae3e5709026b2d7ec13"
	I1121 23:48:44.676992   26889 cri.go:89] found id: "f1ffe717c9acc1343be4709c4cc2b9a8d4103a7f7086b39e554752db9dc132c9"
	I1121 23:48:44.676996   26889 cri.go:89] found id: "2d8c2a76b689b579408c5432035569d448c0ccc7ac8e4a34b4e373f1fb8de96c"
	I1121 23:48:44.677003   26889 cri.go:89] found id: "fdcdf133e27bc1deb19395508496fc46e7183d7c82784e36026a37e4616b634c"
	I1121 23:48:44.677007   26889 cri.go:89] found id: "dafa66d52ee1e679c9d8823cf6b0536f31d7e9635c9b865d81c2b0b7824f7f09"
	I1121 23:48:44.677012   26889 cri.go:89] found id: "4188e634536cb2d7ee1210f1ef11fdfc593c5c568f3907a4a9835af3ba695afd"
	I1121 23:48:44.677018   26889 cri.go:89] found id: "9b71739f67786155a593d80c0e276ce41f1a6abb990cea23f25a6c9d884534fb"
	I1121 23:48:44.677025   26889 cri.go:89] found id: "d5d12cca9e0c9e6587e9f82cf3276be135a3248d2f7fd31949fc16b8b7ec6716"
	I1121 23:48:44.677031   26889 cri.go:89] found id: "d0c5a0bacbbac792d7e58a5e5c7bd0fee9db7cec3a448ebe89ea66be1fd493cc"
	I1121 23:48:44.677048   26889 cri.go:89] found id: "9ad0bc2610d4aed59a9924e959ff00b3282c0f84a8053b2c8a24bce05841da6b"
	I1121 23:48:44.677066   26889 cri.go:89] found id: "a72f974d29159737be96b385ecdb775e4812608008c4cffaa78a874f21d0dbfd"
	I1121 23:48:44.677072   26889 cri.go:89] found id: "e22fb0f003be5df198b862027391260b213589b89bd11cbaf227403d428038a9"
	I1121 23:48:44.677076   26889 cri.go:89] found id: "e2527b33e3e0a37254f4ce209f68208b0d6bb19a8e9d5f4c04577a95745c00e5"
	I1121 23:48:44.677081   26889 cri.go:89] found id: "8f7137d6b0740cb066ea7ebf0361dc186434b4e47bdb1c5dc607dabb5bfc4a73"
	I1121 23:48:44.677090   26889 cri.go:89] found id: "343a757d13fed0f3af37f66cd8776c9b1592f0157fd199fc07c446759a8c79dc"
	I1121 23:48:44.677098   26889 cri.go:89] found id: "07338917ef0048828dfbbd1553f9a031c7bd1f4699ffe7fcee081714d70aa687"
	I1121 23:48:44.677103   26889 cri.go:89] found id: "b69d049422e96952e03c6f61a20b8e9158e45490b685ec179d328f0af957256c"
	I1121 23:48:44.677111   26889 cri.go:89] found id: "d75ed4c5a71d1046bfb84c05ee8ef611de253000d3738fddfd1f3e871449f077"
	I1121 23:48:44.677119   26889 cri.go:89] found id: ""
	I1121 23:48:44.677173   26889 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 23:48:44.691684   26889 out.go:203] 
	W1121 23:48:44.692756   26889 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T23:48:44Z" level=error msg="open /run/runc: no such file or directory"
	
	W1121 23:48:44.692779   26889 out.go:285] * 
	W1121 23:48:44.696077   26889 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1121 23:48:44.697532   26889 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-386094 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (9.17s)

TestAddons/parallel/NvidiaDevicePlugin (6.23s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-mqmzt" [230f0c0c-240c-4820-b379-e5dfea036b89] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003446627s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-386094 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-386094 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (228.688527ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1121 23:48:32.987025   25113 out.go:360] Setting OutFile to fd 1 ...
	I1121 23:48:32.987308   25113 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:48:32.987317   25113 out.go:374] Setting ErrFile to fd 2...
	I1121 23:48:32.987321   25113 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:48:32.987508   25113 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9122/.minikube/bin
	I1121 23:48:32.987733   25113 mustload.go:66] Loading cluster: addons-386094
	I1121 23:48:32.988048   25113 config.go:182] Loaded profile config "addons-386094": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:48:32.988085   25113 addons.go:622] checking whether the cluster is paused
	I1121 23:48:32.988180   25113 config.go:182] Loaded profile config "addons-386094": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:48:32.988193   25113 host.go:66] Checking if "addons-386094" exists ...
	I1121 23:48:32.988560   25113 cli_runner.go:164] Run: docker container inspect addons-386094 --format={{.State.Status}}
	I1121 23:48:33.006418   25113 ssh_runner.go:195] Run: systemctl --version
	I1121 23:48:33.006476   25113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386094
	I1121 23:48:33.024828   25113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/addons-386094/id_rsa Username:docker}
	I1121 23:48:33.111850   25113 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 23:48:33.111908   25113 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 23:48:33.138946   25113 cri.go:89] found id: "075e2ddbfd1b3053a90e5152f5a3c88d7b0f512a81d837cc0eec9e714320db50"
	I1121 23:48:33.138965   25113 cri.go:89] found id: "7dcc52ab64881edaa7603d95b2807743f4ad209f6d38b4cbf9a19086eca32117"
	I1121 23:48:33.138968   25113 cri.go:89] found id: "28017a975316b1551b2d1bc8dc9eca9342d94f3a04393f626159565843f3a4b9"
	I1121 23:48:33.138972   25113 cri.go:89] found id: "af7d23a1702df3cd4a84775a1c1997ae723153a2dae54920ea3e66a13f11402d"
	I1121 23:48:33.138974   25113 cri.go:89] found id: "b0ebc6a2643ce15a93031b7642059742231c8aefb11a96fa36edac0fdf3ed04c"
	I1121 23:48:33.138977   25113 cri.go:89] found id: "08636839ef014bca0593992d4306a8c75b8669bb16e7ab303f060c2b45b61813"
	I1121 23:48:33.138980   25113 cri.go:89] found id: "2cad693d643e270be01d0b986dd241986a30a6e0bf4e8ae3e5709026b2d7ec13"
	I1121 23:48:33.138983   25113 cri.go:89] found id: "f1ffe717c9acc1343be4709c4cc2b9a8d4103a7f7086b39e554752db9dc132c9"
	I1121 23:48:33.138985   25113 cri.go:89] found id: "2d8c2a76b689b579408c5432035569d448c0ccc7ac8e4a34b4e373f1fb8de96c"
	I1121 23:48:33.138991   25113 cri.go:89] found id: "fdcdf133e27bc1deb19395508496fc46e7183d7c82784e36026a37e4616b634c"
	I1121 23:48:33.138994   25113 cri.go:89] found id: "dafa66d52ee1e679c9d8823cf6b0536f31d7e9635c9b865d81c2b0b7824f7f09"
	I1121 23:48:33.138997   25113 cri.go:89] found id: "4188e634536cb2d7ee1210f1ef11fdfc593c5c568f3907a4a9835af3ba695afd"
	I1121 23:48:33.138999   25113 cri.go:89] found id: "9b71739f67786155a593d80c0e276ce41f1a6abb990cea23f25a6c9d884534fb"
	I1121 23:48:33.139002   25113 cri.go:89] found id: "d5d12cca9e0c9e6587e9f82cf3276be135a3248d2f7fd31949fc16b8b7ec6716"
	I1121 23:48:33.139005   25113 cri.go:89] found id: "d0c5a0bacbbac792d7e58a5e5c7bd0fee9db7cec3a448ebe89ea66be1fd493cc"
	I1121 23:48:33.139010   25113 cri.go:89] found id: "9ad0bc2610d4aed59a9924e959ff00b3282c0f84a8053b2c8a24bce05841da6b"
	I1121 23:48:33.139016   25113 cri.go:89] found id: "a72f974d29159737be96b385ecdb775e4812608008c4cffaa78a874f21d0dbfd"
	I1121 23:48:33.139019   25113 cri.go:89] found id: "e22fb0f003be5df198b862027391260b213589b89bd11cbaf227403d428038a9"
	I1121 23:48:33.139022   25113 cri.go:89] found id: "e2527b33e3e0a37254f4ce209f68208b0d6bb19a8e9d5f4c04577a95745c00e5"
	I1121 23:48:33.139024   25113 cri.go:89] found id: "8f7137d6b0740cb066ea7ebf0361dc186434b4e47bdb1c5dc607dabb5bfc4a73"
	I1121 23:48:33.139027   25113 cri.go:89] found id: "343a757d13fed0f3af37f66cd8776c9b1592f0157fd199fc07c446759a8c79dc"
	I1121 23:48:33.139030   25113 cri.go:89] found id: "07338917ef0048828dfbbd1553f9a031c7bd1f4699ffe7fcee081714d70aa687"
	I1121 23:48:33.139033   25113 cri.go:89] found id: "b69d049422e96952e03c6f61a20b8e9158e45490b685ec179d328f0af957256c"
	I1121 23:48:33.139035   25113 cri.go:89] found id: "d75ed4c5a71d1046bfb84c05ee8ef611de253000d3738fddfd1f3e871449f077"
	I1121 23:48:33.139038   25113 cri.go:89] found id: ""
	I1121 23:48:33.139089   25113 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 23:48:33.152111   25113 out.go:203] 
	W1121 23:48:33.153185   25113 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T23:48:33Z" level=error msg="open /run/runc: no such file or directory"
	
	W1121 23:48:33.153201   25113 out.go:285] * 
	W1121 23:48:33.156106   25113 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1121 23:48:33.157136   25113 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-amd64 -p addons-386094 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (6.23s)

TestAddons/parallel/Yakd (6.25s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-qqz4m" [2c0fdbf6-db0d-4be9-bc78-a2bb0333f9f4] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004644614s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-386094 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-386094 addons disable yakd --alsologtostderr -v=1: exit status 11 (240.707094ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1121 23:48:38.278627   25476 out.go:360] Setting OutFile to fd 1 ...
	I1121 23:48:38.278910   25476 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:48:38.278920   25476 out.go:374] Setting ErrFile to fd 2...
	I1121 23:48:38.278924   25476 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:48:38.279207   25476 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9122/.minikube/bin
	I1121 23:48:38.279514   25476 mustload.go:66] Loading cluster: addons-386094
	I1121 23:48:38.279923   25476 config.go:182] Loaded profile config "addons-386094": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:48:38.279944   25476 addons.go:622] checking whether the cluster is paused
	I1121 23:48:38.280089   25476 config.go:182] Loaded profile config "addons-386094": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:48:38.280105   25476 host.go:66] Checking if "addons-386094" exists ...
	I1121 23:48:38.280481   25476 cli_runner.go:164] Run: docker container inspect addons-386094 --format={{.State.Status}}
	I1121 23:48:38.299075   25476 ssh_runner.go:195] Run: systemctl --version
	I1121 23:48:38.299131   25476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386094
	I1121 23:48:38.315947   25476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/addons-386094/id_rsa Username:docker}
	I1121 23:48:38.403870   25476 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 23:48:38.403937   25476 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 23:48:38.434250   25476 cri.go:89] found id: "075e2ddbfd1b3053a90e5152f5a3c88d7b0f512a81d837cc0eec9e714320db50"
	I1121 23:48:38.434295   25476 cri.go:89] found id: "7dcc52ab64881edaa7603d95b2807743f4ad209f6d38b4cbf9a19086eca32117"
	I1121 23:48:38.434302   25476 cri.go:89] found id: "28017a975316b1551b2d1bc8dc9eca9342d94f3a04393f626159565843f3a4b9"
	I1121 23:48:38.434307   25476 cri.go:89] found id: "af7d23a1702df3cd4a84775a1c1997ae723153a2dae54920ea3e66a13f11402d"
	I1121 23:48:38.434311   25476 cri.go:89] found id: "b0ebc6a2643ce15a93031b7642059742231c8aefb11a96fa36edac0fdf3ed04c"
	I1121 23:48:38.434316   25476 cri.go:89] found id: "08636839ef014bca0593992d4306a8c75b8669bb16e7ab303f060c2b45b61813"
	I1121 23:48:38.434319   25476 cri.go:89] found id: "2cad693d643e270be01d0b986dd241986a30a6e0bf4e8ae3e5709026b2d7ec13"
	I1121 23:48:38.434322   25476 cri.go:89] found id: "f1ffe717c9acc1343be4709c4cc2b9a8d4103a7f7086b39e554752db9dc132c9"
	I1121 23:48:38.434325   25476 cri.go:89] found id: "2d8c2a76b689b579408c5432035569d448c0ccc7ac8e4a34b4e373f1fb8de96c"
	I1121 23:48:38.434337   25476 cri.go:89] found id: "fdcdf133e27bc1deb19395508496fc46e7183d7c82784e36026a37e4616b634c"
	I1121 23:48:38.434343   25476 cri.go:89] found id: "dafa66d52ee1e679c9d8823cf6b0536f31d7e9635c9b865d81c2b0b7824f7f09"
	I1121 23:48:38.434346   25476 cri.go:89] found id: "4188e634536cb2d7ee1210f1ef11fdfc593c5c568f3907a4a9835af3ba695afd"
	I1121 23:48:38.434348   25476 cri.go:89] found id: "9b71739f67786155a593d80c0e276ce41f1a6abb990cea23f25a6c9d884534fb"
	I1121 23:48:38.434352   25476 cri.go:89] found id: "d5d12cca9e0c9e6587e9f82cf3276be135a3248d2f7fd31949fc16b8b7ec6716"
	I1121 23:48:38.434354   25476 cri.go:89] found id: "d0c5a0bacbbac792d7e58a5e5c7bd0fee9db7cec3a448ebe89ea66be1fd493cc"
	I1121 23:48:38.434385   25476 cri.go:89] found id: "9ad0bc2610d4aed59a9924e959ff00b3282c0f84a8053b2c8a24bce05841da6b"
	I1121 23:48:38.434391   25476 cri.go:89] found id: "a72f974d29159737be96b385ecdb775e4812608008c4cffaa78a874f21d0dbfd"
	I1121 23:48:38.434396   25476 cri.go:89] found id: "e22fb0f003be5df198b862027391260b213589b89bd11cbaf227403d428038a9"
	I1121 23:48:38.434399   25476 cri.go:89] found id: "e2527b33e3e0a37254f4ce209f68208b0d6bb19a8e9d5f4c04577a95745c00e5"
	I1121 23:48:38.434401   25476 cri.go:89] found id: "8f7137d6b0740cb066ea7ebf0361dc186434b4e47bdb1c5dc607dabb5bfc4a73"
	I1121 23:48:38.434404   25476 cri.go:89] found id: "343a757d13fed0f3af37f66cd8776c9b1592f0157fd199fc07c446759a8c79dc"
	I1121 23:48:38.434407   25476 cri.go:89] found id: "07338917ef0048828dfbbd1553f9a031c7bd1f4699ffe7fcee081714d70aa687"
	I1121 23:48:38.434410   25476 cri.go:89] found id: "b69d049422e96952e03c6f61a20b8e9158e45490b685ec179d328f0af957256c"
	I1121 23:48:38.434413   25476 cri.go:89] found id: "d75ed4c5a71d1046bfb84c05ee8ef611de253000d3738fddfd1f3e871449f077"
	I1121 23:48:38.434415   25476 cri.go:89] found id: ""
	I1121 23:48:38.434454   25476 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 23:48:38.448021   25476 out.go:203] 
	W1121 23:48:38.449354   25476 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T23:48:38Z" level=error msg="open /run/runc: no such file or directory"
	
	W1121 23:48:38.449383   25476 out.go:285] * 
	W1121 23:48:38.454513   25476 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1121 23:48:38.455841   25476 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-amd64 -p addons-386094 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.25s)

TestAddons/parallel/AmdGpuDevicePlugin (6.23s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-rjdxd" [9868ed15-c63b-4d0b-badf-c7e7b94197c6] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 6.002317948s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-386094 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-386094 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: exit status 11 (227.729196ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1121 23:48:35.359313   25219 out.go:360] Setting OutFile to fd 1 ...
	I1121 23:48:35.359593   25219 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:48:35.359603   25219 out.go:374] Setting ErrFile to fd 2...
	I1121 23:48:35.359607   25219 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:48:35.359820   25219 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9122/.minikube/bin
	I1121 23:48:35.360154   25219 mustload.go:66] Loading cluster: addons-386094
	I1121 23:48:35.360530   25219 config.go:182] Loaded profile config "addons-386094": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:48:35.360547   25219 addons.go:622] checking whether the cluster is paused
	I1121 23:48:35.360646   25219 config.go:182] Loaded profile config "addons-386094": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:48:35.360664   25219 host.go:66] Checking if "addons-386094" exists ...
	I1121 23:48:35.361093   25219 cli_runner.go:164] Run: docker container inspect addons-386094 --format={{.State.Status}}
	I1121 23:48:35.378620   25219 ssh_runner.go:195] Run: systemctl --version
	I1121 23:48:35.378676   25219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386094
	I1121 23:48:35.395143   25219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/addons-386094/id_rsa Username:docker}
	I1121 23:48:35.482436   25219 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 23:48:35.482512   25219 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 23:48:35.510110   25219 cri.go:89] found id: "075e2ddbfd1b3053a90e5152f5a3c88d7b0f512a81d837cc0eec9e714320db50"
	I1121 23:48:35.510130   25219 cri.go:89] found id: "7dcc52ab64881edaa7603d95b2807743f4ad209f6d38b4cbf9a19086eca32117"
	I1121 23:48:35.510137   25219 cri.go:89] found id: "28017a975316b1551b2d1bc8dc9eca9342d94f3a04393f626159565843f3a4b9"
	I1121 23:48:35.510142   25219 cri.go:89] found id: "af7d23a1702df3cd4a84775a1c1997ae723153a2dae54920ea3e66a13f11402d"
	I1121 23:48:35.510146   25219 cri.go:89] found id: "b0ebc6a2643ce15a93031b7642059742231c8aefb11a96fa36edac0fdf3ed04c"
	I1121 23:48:35.510151   25219 cri.go:89] found id: "08636839ef014bca0593992d4306a8c75b8669bb16e7ab303f060c2b45b61813"
	I1121 23:48:35.510156   25219 cri.go:89] found id: "2cad693d643e270be01d0b986dd241986a30a6e0bf4e8ae3e5709026b2d7ec13"
	I1121 23:48:35.510160   25219 cri.go:89] found id: "f1ffe717c9acc1343be4709c4cc2b9a8d4103a7f7086b39e554752db9dc132c9"
	I1121 23:48:35.510165   25219 cri.go:89] found id: "2d8c2a76b689b579408c5432035569d448c0ccc7ac8e4a34b4e373f1fb8de96c"
	I1121 23:48:35.510179   25219 cri.go:89] found id: "fdcdf133e27bc1deb19395508496fc46e7183d7c82784e36026a37e4616b634c"
	I1121 23:48:35.510189   25219 cri.go:89] found id: "dafa66d52ee1e679c9d8823cf6b0536f31d7e9635c9b865d81c2b0b7824f7f09"
	I1121 23:48:35.510195   25219 cri.go:89] found id: "4188e634536cb2d7ee1210f1ef11fdfc593c5c568f3907a4a9835af3ba695afd"
	I1121 23:48:35.510202   25219 cri.go:89] found id: "9b71739f67786155a593d80c0e276ce41f1a6abb990cea23f25a6c9d884534fb"
	I1121 23:48:35.510207   25219 cri.go:89] found id: "d5d12cca9e0c9e6587e9f82cf3276be135a3248d2f7fd31949fc16b8b7ec6716"
	I1121 23:48:35.510215   25219 cri.go:89] found id: "d0c5a0bacbbac792d7e58a5e5c7bd0fee9db7cec3a448ebe89ea66be1fd493cc"
	I1121 23:48:35.510222   25219 cri.go:89] found id: "9ad0bc2610d4aed59a9924e959ff00b3282c0f84a8053b2c8a24bce05841da6b"
	I1121 23:48:35.510229   25219 cri.go:89] found id: "a72f974d29159737be96b385ecdb775e4812608008c4cffaa78a874f21d0dbfd"
	I1121 23:48:35.510234   25219 cri.go:89] found id: "e22fb0f003be5df198b862027391260b213589b89bd11cbaf227403d428038a9"
	I1121 23:48:35.510237   25219 cri.go:89] found id: "e2527b33e3e0a37254f4ce209f68208b0d6bb19a8e9d5f4c04577a95745c00e5"
	I1121 23:48:35.510239   25219 cri.go:89] found id: "8f7137d6b0740cb066ea7ebf0361dc186434b4e47bdb1c5dc607dabb5bfc4a73"
	I1121 23:48:35.510242   25219 cri.go:89] found id: "343a757d13fed0f3af37f66cd8776c9b1592f0157fd199fc07c446759a8c79dc"
	I1121 23:48:35.510247   25219 cri.go:89] found id: "07338917ef0048828dfbbd1553f9a031c7bd1f4699ffe7fcee081714d70aa687"
	I1121 23:48:35.510250   25219 cri.go:89] found id: "b69d049422e96952e03c6f61a20b8e9158e45490b685ec179d328f0af957256c"
	I1121 23:48:35.510253   25219 cri.go:89] found id: "d75ed4c5a71d1046bfb84c05ee8ef611de253000d3738fddfd1f3e871449f077"
	I1121 23:48:35.510258   25219 cri.go:89] found id: ""
	I1121 23:48:35.510293   25219 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 23:48:35.523440   25219 out.go:203] 
	W1121 23:48:35.524668   25219 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T23:48:35Z" level=error msg="open /run/runc: no such file or directory"
	
	W1121 23:48:35.524688   25219 out.go:285] * 
	W1121 23:48:35.527801   25219 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1121 23:48:35.528956   25219 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable amd-gpu-device-plugin addon: args "out/minikube-linux-amd64 -p addons-386094 addons disable amd-gpu-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/AmdGpuDevicePlugin (6.23s)

TestFunctional/parallel/ServiceCmdConnect (602.83s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-159819 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-159819 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-7fcbt" [1d9f7d1d-074b-42c9-aace-ec9f03351d4c] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-159819 -n functional-159819
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-11-22 00:04:37.067957 +0000 UTC m=+1109.759218568
functional_test.go:1645: (dbg) Run:  kubectl --context functional-159819 describe po hello-node-connect-7d85dfc575-7fcbt -n default
functional_test.go:1645: (dbg) kubectl --context functional-159819 describe po hello-node-connect-7d85dfc575-7fcbt -n default:
Name:             hello-node-connect-7d85dfc575-7fcbt
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-159819/192.168.49.2
Start Time:       Fri, 21 Nov 2025 23:54:36 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
  IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vmgzp (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-vmgzp:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-7fcbt to functional-159819
  Normal   Pulling    7m7s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m7s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     7m7s (x5 over 10m)    kubelet            Error: ErrImagePull
  Warning  Failed     4m59s (x20 over 10m)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m47s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1645: (dbg) Run:  kubectl --context functional-159819 logs hello-node-connect-7d85dfc575-7fcbt -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-159819 logs hello-node-connect-7d85dfc575-7fcbt -n default: exit status 1 (61.206306ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-7fcbt" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1645: kubectl --context functional-159819 logs hello-node-connect-7d85dfc575-7fcbt -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
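
The repeated pull failure is a name-resolution problem rather than a registry outage: Docker-style tooling quietly normalizes the short name kicbase/echo-server to docker.io/kicbase/echo-server:latest, while this node's crio runs with short-name mode set to enforcing and refuses to guess when an unqualified name could match more than one search registry in registries.conf. A small sketch of the Docker-side normalization, assuming the github.com/distribution/reference module (an assumption; the test suite itself does not use it):

// Shows why "kicbase/echo-server" pulls under Docker but is ambiguous
// under crio's enforcing short-name mode. Assumes the
// github.com/distribution/reference module, which is hypothetical here.
package main

import (
	"fmt"

	"github.com/distribution/reference"
)

func main() {
	named, err := reference.ParseNormalizedNamed("kicbase/echo-server")
	if err != nil {
		panic(err)
	}
	// Docker expands the short name against a single implied registry
	// and default tag; crio instead consults registries.conf and, when
	// enforcing, rejects names that could match more than one
	// unqualified-search registry.
	fmt.Println(reference.TagNameOnly(named)) // docker.io/kicbase/echo-server:latest
}

Under that reading, creating the deployment with a fully qualified image reference (for example docker.io/kicbase/echo-server:latest, if that is the registry the test intends) would sidestep the ambiguity without editing registries.conf on the node.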
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-159819 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-7fcbt
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-159819/192.168.49.2
Start Time:       Fri, 21 Nov 2025 23:54:36 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
  IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vmgzp (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-vmgzp:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-7fcbt to functional-159819
  Normal   Pulling    7m7s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m7s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     7m7s (x5 over 10m)    kubelet            Error: ErrImagePull
  Warning  Failed     4m59s (x20 over 10m)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m47s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"

functional_test.go:1618: (dbg) Run:  kubectl --context functional-159819 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-159819 logs -l app=hello-node-connect: exit status 1 (63.968308ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-7fcbt" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1620: "kubectl --context functional-159819 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-159819 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.96.249.245
IPs:                      10.96.249.245
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30640/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
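
The empty Endpoints field is the service-level echo of the same failure: the lone pod matching the selector never becomes Ready, so the endpoints controller publishes no addresses and the NodePort has nothing to forward to. A small client-go sketch that verifies this directly, assuming a reachable kubeconfig at the default path (the service and namespace names mirror the log):

// Counts ready endpoint addresses for the hello-node-connect service.
// Assumption: kubeconfig at the default ~/.kube/config location with the
// functional-159819 context active.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ep, err := cs.CoreV1().Endpoints("default").Get(context.TODO(), "hello-node-connect", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	ready := 0
	for _, subset := range ep.Subsets {
		ready += len(subset.Addresses)
	}
	// Prints 0 while the only backing pod is stuck in ImagePullBackOff.
	fmt.Printf("ready endpoints for hello-node-connect: %d\n", ready)
}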
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-159819
helpers_test.go:243: (dbg) docker inspect functional-159819:

-- stdout --
	[
	    {
	        "Id": "da0df4c59cd45cd1062e4cf4a176299ba34b0e0295f4fa1102a805e62cf2a6cb",
	        "Created": "2025-11-21T23:52:26.430801247Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 38367,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-21T23:52:26.459000348Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e5906b22e872a17998ae88aee6d850484e7a99144e0db6afcf2c44a53e6042d4",
	        "ResolvConfPath": "/var/lib/docker/containers/da0df4c59cd45cd1062e4cf4a176299ba34b0e0295f4fa1102a805e62cf2a6cb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/da0df4c59cd45cd1062e4cf4a176299ba34b0e0295f4fa1102a805e62cf2a6cb/hostname",
	        "HostsPath": "/var/lib/docker/containers/da0df4c59cd45cd1062e4cf4a176299ba34b0e0295f4fa1102a805e62cf2a6cb/hosts",
	        "LogPath": "/var/lib/docker/containers/da0df4c59cd45cd1062e4cf4a176299ba34b0e0295f4fa1102a805e62cf2a6cb/da0df4c59cd45cd1062e4cf4a176299ba34b0e0295f4fa1102a805e62cf2a6cb-json.log",
	        "Name": "/functional-159819",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-159819:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-159819",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "da0df4c59cd45cd1062e4cf4a176299ba34b0e0295f4fa1102a805e62cf2a6cb",
	                "LowerDir": "/var/lib/docker/overlay2/e11c30990169ee9ab3a4cd5d14fc42fc0d98230dce88157caa85639dde0a0fec-init/diff:/var/lib/docker/overlay2/ba9391341a6f38c98c330a240b44a37e901d779a7c95e15c141c59c46e09c348/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e11c30990169ee9ab3a4cd5d14fc42fc0d98230dce88157caa85639dde0a0fec/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e11c30990169ee9ab3a4cd5d14fc42fc0d98230dce88157caa85639dde0a0fec/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e11c30990169ee9ab3a4cd5d14fc42fc0d98230dce88157caa85639dde0a0fec/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-159819",
	                "Source": "/var/lib/docker/volumes/functional-159819/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-159819",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-159819",
	                "name.minikube.sigs.k8s.io": "functional-159819",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "5bb86c669ca58579245066d81e83d5638d8b9d103cb7aafbd4afaa3245f60eff",
	            "SandboxKey": "/var/run/docker/netns/5bb86c669ca5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-159819": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4a3869376f09ae9cf9f62badfa91f867cc8382032da16c61a7529e0eaa79a6b2",
	                    "EndpointID": "bddc57ebd823f8164d3d8691380abcdee57c80fe37b88f3ac575e086f8b00ae4",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "9e:80:b7:c9:4e:ab",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-159819",
	                        "da0df4c59cd4"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
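Note on the inspect output above: the NetworkSettings.Ports map is what ties the node container's Kubernetes ports to the host. As a minimal sketch (assuming the functional-159819 container still exists), the same mapping can be read back directly; the Go template below just walks the JSON structure shown above:

  # Host port bound to the apiserver port (8441/tcp in the container); prints 32781 here.
  docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-159819
  # Equivalent shortcut; prints 127.0.0.1:32781.
  docker port functional-159819 8441/tcp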
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-159819 -n functional-159819
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-159819 logs -n 25: (1.26322047s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                               ARGS                                                                │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-159819 image build -t localhost/my-image:functional-159819 testdata/build --alsologtostderr                            │ functional-159819 │ jenkins │ v1.37.0 │ 21 Nov 25 23:55 UTC │ 21 Nov 25 23:55 UTC │
	│ ssh     │ functional-159819 ssh stat /mount-9p/created-by-test                                                                              │ functional-159819 │ jenkins │ v1.37.0 │ 21 Nov 25 23:55 UTC │ 21 Nov 25 23:55 UTC │
	│ ssh     │ functional-159819 ssh stat /mount-9p/created-by-pod                                                                               │ functional-159819 │ jenkins │ v1.37.0 │ 21 Nov 25 23:55 UTC │ 21 Nov 25 23:55 UTC │
	│ ssh     │ functional-159819 ssh sudo umount -f /mount-9p                                                                                    │ functional-159819 │ jenkins │ v1.37.0 │ 21 Nov 25 23:55 UTC │ 21 Nov 25 23:55 UTC │
	│ mount   │ -p functional-159819 /tmp/TestFunctionalparallelMountCmdspecific-port1985526241/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-159819 │ jenkins │ v1.37.0 │ 21 Nov 25 23:55 UTC │                     │
	│ ssh     │ functional-159819 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-159819 │ jenkins │ v1.37.0 │ 21 Nov 25 23:55 UTC │                     │
	│ image   │ functional-159819 image ls                                                                                                        │ functional-159819 │ jenkins │ v1.37.0 │ 21 Nov 25 23:55 UTC │ 21 Nov 25 23:55 UTC │
	│ image   │ functional-159819 image ls --format json --alsologtostderr                                                                        │ functional-159819 │ jenkins │ v1.37.0 │ 21 Nov 25 23:55 UTC │ 21 Nov 25 23:55 UTC │
	│ image   │ functional-159819 image ls --format table --alsologtostderr                                                                       │ functional-159819 │ jenkins │ v1.37.0 │ 21 Nov 25 23:55 UTC │ 21 Nov 25 23:55 UTC │
	│ ssh     │ functional-159819 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-159819 │ jenkins │ v1.37.0 │ 21 Nov 25 23:55 UTC │ 21 Nov 25 23:55 UTC │
	│ ssh     │ functional-159819 ssh -- ls -la /mount-9p                                                                                         │ functional-159819 │ jenkins │ v1.37.0 │ 21 Nov 25 23:55 UTC │ 21 Nov 25 23:55 UTC │
	│ ssh     │ functional-159819 ssh sudo umount -f /mount-9p                                                                                    │ functional-159819 │ jenkins │ v1.37.0 │ 21 Nov 25 23:55 UTC │                     │
	│ mount   │ -p functional-159819 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3892722256/001:/mount2 --alsologtostderr -v=1                │ functional-159819 │ jenkins │ v1.37.0 │ 21 Nov 25 23:55 UTC │                     │
	│ mount   │ -p functional-159819 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3892722256/001:/mount1 --alsologtostderr -v=1                │ functional-159819 │ jenkins │ v1.37.0 │ 21 Nov 25 23:55 UTC │                     │
	│ mount   │ -p functional-159819 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3892722256/001:/mount3 --alsologtostderr -v=1                │ functional-159819 │ jenkins │ v1.37.0 │ 21 Nov 25 23:55 UTC │                     │
	│ ssh     │ functional-159819 ssh findmnt -T /mount1                                                                                          │ functional-159819 │ jenkins │ v1.37.0 │ 21 Nov 25 23:55 UTC │                     │
	│ ssh     │ functional-159819 ssh findmnt -T /mount1                                                                                          │ functional-159819 │ jenkins │ v1.37.0 │ 21 Nov 25 23:55 UTC │ 21 Nov 25 23:55 UTC │
	│ ssh     │ functional-159819 ssh findmnt -T /mount2                                                                                          │ functional-159819 │ jenkins │ v1.37.0 │ 21 Nov 25 23:55 UTC │ 21 Nov 25 23:55 UTC │
	│ ssh     │ functional-159819 ssh findmnt -T /mount3                                                                                          │ functional-159819 │ jenkins │ v1.37.0 │ 21 Nov 25 23:55 UTC │ 21 Nov 25 23:55 UTC │
	│ mount   │ -p functional-159819 --kill=true                                                                                                  │ functional-159819 │ jenkins │ v1.37.0 │ 21 Nov 25 23:55 UTC │                     │
	│ service │ functional-159819 service list                                                                                                    │ functional-159819 │ jenkins │ v1.37.0 │ 22 Nov 25 00:04 UTC │ 22 Nov 25 00:04 UTC │
	│ service │ functional-159819 service list -o json                                                                                            │ functional-159819 │ jenkins │ v1.37.0 │ 22 Nov 25 00:04 UTC │ 22 Nov 25 00:04 UTC │
	│ service │ functional-159819 service --namespace=default --https --url hello-node                                                            │ functional-159819 │ jenkins │ v1.37.0 │ 22 Nov 25 00:04 UTC │                     │
	│ service │ functional-159819 service hello-node --url --format={{.IP}}                                                                       │ functional-159819 │ jenkins │ v1.37.0 │ 22 Nov 25 00:04 UTC │                     │
	│ service │ functional-159819 service hello-node --url                                                                                        │ functional-159819 │ jenkins │ v1.37.0 │ 22 Nov 25 00:04 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 23:54:58
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1121 23:54:58.867309   51639 out.go:360] Setting OutFile to fd 1 ...
	I1121 23:54:58.867388   51639 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:54:58.867392   51639 out.go:374] Setting ErrFile to fd 2...
	I1121 23:54:58.867395   51639 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:54:58.867643   51639 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9122/.minikube/bin
	I1121 23:54:58.868040   51639 out.go:368] Setting JSON to false
	I1121 23:54:58.868936   51639 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2248,"bootTime":1763767051,"procs":240,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1121 23:54:58.868995   51639 start.go:143] virtualization: kvm guest
	I1121 23:54:58.870972   51639 out.go:179] * [functional-159819] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1121 23:54:58.872036   51639 out.go:179]   - MINIKUBE_LOCATION=21934
	I1121 23:54:58.872044   51639 notify.go:221] Checking for updates...
	I1121 23:54:58.873991   51639 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 23:54:58.875129   51639 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-9122/kubeconfig
	I1121 23:54:58.876119   51639 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-9122/.minikube
	I1121 23:54:58.877105   51639 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1121 23:54:58.878128   51639 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 23:54:58.879518   51639 config.go:182] Loaded profile config "functional-159819": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:54:58.880029   51639 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 23:54:58.903237   51639 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1121 23:54:58.903332   51639 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 23:54:58.961962   51639 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-21 23:54:58.952264927 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 23:54:58.962113   51639 docker.go:319] overlay module found
	I1121 23:54:58.963680   51639 out.go:179] * Using the docker driver based on the existing profile
	I1121 23:54:58.964643   51639 start.go:309] selected driver: docker
	I1121 23:54:58.964661   51639 start.go:930] validating driver "docker" against &{Name:functional-159819 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-159819 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 23:54:58.964767   51639 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 23:54:58.966607   51639 out.go:203] 
	W1121 23:54:58.967707   51639 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1121 23:54:58.968687   51639 out.go:203] 
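	Note: the start above fails minikube's preflight memory check, so nothing after it runs. A minimal sketch of an invocation that clears the RSRC_INSUFFICIENT_REQ_MEMORY gate (the 2048 value is illustrative; any --memory of at least 1800 MB passes):
	
	  out/minikube-linux-amd64 start -p functional-159819 --driver=docker --container-runtime=crio --memory=2048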
	
	
	==> CRI-O <==
	Nov 21 23:55:05 functional-159819 crio[3591]: time="2025-11-21T23:55:05.202564247Z" level=info msg="Removing pod sandbox: 69ea08fa8b612c4a1aef1e601e15f5500813c68b6d467066895df51ada889363" id=f30595b8-d54f-4880-b661-3f5788a7536c name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 21 23:55:05 functional-159819 crio[3591]: time="2025-11-21T23:55:05.204716538Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 21 23:55:05 functional-159819 crio[3591]: time="2025-11-21T23:55:05.204802348Z" level=info msg="Removed pod sandbox: 69ea08fa8b612c4a1aef1e601e15f5500813c68b6d467066895df51ada889363" id=f30595b8-d54f-4880-b661-3f5788a7536c name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 21 23:55:05 functional-159819 crio[3591]: time="2025-11-21T23:55:05.615107496Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=2fe7c8cb-22bc-407c-9b8c-6bf1894abf33 name=/runtime.v1.ImageService/PullImage
	Nov 21 23:55:05 functional-159819 crio[3591]: time="2025-11-21T23:55:05.61564176Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5edc0bae-c1c7-46dd-99e1-e563290caa67 name=/runtime.v1.ImageService/ImageStatus
	Nov 21 23:55:05 functional-159819 crio[3591]: time="2025-11-21T23:55:05.616933296Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=37b5062a-d290-44e8-91ff-0ac25df96315 name=/runtime.v1.ImageService/ImageStatus
	Nov 21 23:55:05 functional-159819 crio[3591]: time="2025-11-21T23:55:05.620372347Z" level=info msg="Creating container: default/busybox-mount/mount-munger" id=643d2061-ab1e-4f95-a5ec-63ce76961cbf name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 23:55:05 functional-159819 crio[3591]: time="2025-11-21T23:55:05.620478292Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 23:55:05 functional-159819 crio[3591]: time="2025-11-21T23:55:05.624836728Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 23:55:05 functional-159819 crio[3591]: time="2025-11-21T23:55:05.625269962Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 23:55:05 functional-159819 crio[3591]: time="2025-11-21T23:55:05.651154207Z" level=info msg="Created container f0ce936a2126c87c7d7f465b5eb580cba30376142c653dc6181f83a7d0d282f8: default/busybox-mount/mount-munger" id=643d2061-ab1e-4f95-a5ec-63ce76961cbf name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 23:55:05 functional-159819 crio[3591]: time="2025-11-21T23:55:05.651702499Z" level=info msg="Starting container: f0ce936a2126c87c7d7f465b5eb580cba30376142c653dc6181f83a7d0d282f8" id=cdf56404-7979-43cd-b9b0-e1a853fff8e0 name=/runtime.v1.RuntimeService/StartContainer
	Nov 21 23:55:05 functional-159819 crio[3591]: time="2025-11-21T23:55:05.653527572Z" level=info msg="Started container" PID=7377 containerID=f0ce936a2126c87c7d7f465b5eb580cba30376142c653dc6181f83a7d0d282f8 description=default/busybox-mount/mount-munger id=cdf56404-7979-43cd-b9b0-e1a853fff8e0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9a9f3718078f4b73f45aced3857849c9f12bd2e5d9613e3841319724f7b18963
	Nov 21 23:55:07 functional-159819 crio[3591]: time="2025-11-21T23:55:07.375685491Z" level=info msg="Stopping pod sandbox: 9a9f3718078f4b73f45aced3857849c9f12bd2e5d9613e3841319724f7b18963" id=3c507982-8437-420c-b5a6-d5e128f24428 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 21 23:55:07 functional-159819 crio[3591]: time="2025-11-21T23:55:07.375968363Z" level=info msg="Got pod network &{Name:busybox-mount Namespace:default ID:9a9f3718078f4b73f45aced3857849c9f12bd2e5d9613e3841319724f7b18963 UID:d1fccc36-68fc-4ee4-a87f-3cf3e4de1034 NetNS:/var/run/netns/6fef857d-a489-41e7-8047-6ac6b9228161 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008a750}] Aliases:map[]}"
	Nov 21 23:55:07 functional-159819 crio[3591]: time="2025-11-21T23:55:07.376152213Z" level=info msg="Deleting pod default_busybox-mount from CNI network \"kindnet\" (type=ptp)"
	Nov 21 23:55:07 functional-159819 crio[3591]: time="2025-11-21T23:55:07.395599128Z" level=info msg="Stopped pod sandbox: 9a9f3718078f4b73f45aced3857849c9f12bd2e5d9613e3841319724f7b18963" id=3c507982-8437-420c-b5a6-d5e128f24428 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 21 23:55:15 functional-159819 crio[3591]: time="2025-11-21T23:55:15.174073152Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=d1b74821-0b87-47d9-8b82-8239dbeacd67 name=/runtime.v1.ImageService/PullImage
	Nov 21 23:55:17 functional-159819 crio[3591]: time="2025-11-21T23:55:17.173781705Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=f0889937-07e9-4128-bacc-7d2a274eef87 name=/runtime.v1.ImageService/PullImage
	Nov 21 23:56:03 functional-159819 crio[3591]: time="2025-11-21T23:56:03.173113345Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=99a9ebb3-6198-4c54-a195-0124a208fcb5 name=/runtime.v1.ImageService/PullImage
	Nov 21 23:56:09 functional-159819 crio[3591]: time="2025-11-21T23:56:09.173461762Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=8a5c6489-9c5c-4ac8-8e0d-e141edd3ae01 name=/runtime.v1.ImageService/PullImage
	Nov 21 23:57:27 functional-159819 crio[3591]: time="2025-11-21T23:57:27.173502821Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=77b8f644-01a9-4bf9-a491-67cd87a27572 name=/runtime.v1.ImageService/PullImage
	Nov 21 23:57:30 functional-159819 crio[3591]: time="2025-11-21T23:57:30.172945005Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=75f4b5e8-ce96-46e9-8a23-745c1c3649f0 name=/runtime.v1.ImageService/PullImage
	Nov 22 00:00:13 functional-159819 crio[3591]: time="2025-11-22T00:00:13.173838389Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=65b7a107-b4cf-4b5c-9c68-9a6c1e95ca1a name=/runtime.v1.ImageService/PullImage
	Nov 22 00:00:22 functional-159819 crio[3591]: time="2025-11-22T00:00:22.173129793Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=e5728b60-3bed-48f9-a3cf-3847b10648e7 name=/runtime.v1.ImageService/PullImage
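	Note: the log ends with repeated pulls of kicbase/echo-server:latest that never report completion, which matches the hello-node pods staying unready. A hedged debugging sketch, run inside the node via "out/minikube-linux-amd64 -p functional-159819 ssh" (the grep pattern is illustrative):
	
	  # Did the image ever land in CRI-O's store?
	  sudo crictl images | grep echo-server
	  # Retry the pull by hand to surface the underlying registry error.
	  sudo crictl pull docker.io/kicbase/echo-server:latest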
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	f0ce936a2126c       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998              9 minutes ago       Exited              mount-munger                0                   9a9f3718078f4       busybox-mount                                default
	a7340efbcac05       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   9 minutes ago       Running             dashboard-metrics-scraper   0                   c69046b57451a       dashboard-metrics-scraper-77bf4d6c4c-4vqjw   kubernetes-dashboard
	135e1ed32497c       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029         9 minutes ago       Running             kubernetes-dashboard        0                   e454d20d1e258       kubernetes-dashboard-855c9754f9-mrd6z        kubernetes-dashboard
	5fb4e566f262b       docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541                  9 minutes ago       Running             myfrontend                  0                   8ec56f19a72bc       sp-pod                                       default
	9c875cf3fdfb1       docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da                  9 minutes ago       Running             mysql                       0                   4ebe0cdc27f62       mysql-5bb876957f-9bkv6                       default
	b3fa572900824       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                  10 minutes ago      Running             nginx                       0                   e5f28ea269528       nginx-svc                                    default
	f695d445edcef       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                 10 minutes ago      Running             kube-apiserver              0                   940257890ae49       kube-apiserver-functional-159819             kube-system
	16c7fb794e902       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                 10 minutes ago      Running             kube-controller-manager     2                   cab681b148bcd       kube-controller-manager-functional-159819    kube-system
	464b0edce6e2d       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 10 minutes ago      Running             etcd                        2                   4b62b63aac5ae       etcd-functional-159819                       kube-system
	6cd4b0ffd08c4       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 10 minutes ago      Created             etcd                        1                   4b62b63aac5ae       etcd-functional-159819                       kube-system
	a47ae4fabf8d8       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                 10 minutes ago      Created             kube-apiserver              1                   f4eda03c28939       kube-apiserver-functional-159819             kube-system
	34f2d8301defa       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                 10 minutes ago      Running             kube-proxy                  1                   fa932d0535dd0       kube-proxy-24f8m                             kube-system
	c544185adbd3d       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 10 minutes ago      Running             kindnet-cni                 1                   b3cfa544f76ac       kindnet-w7vhs                                kube-system
	bcbb97bc0afcc       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                 10 minutes ago      Exited              kube-controller-manager     1                   cab681b148bcd       kube-controller-manager-functional-159819    kube-system
	f21c0b6424246       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                 10 minutes ago      Running             kube-scheduler              1                   d34b43b060e7f       kube-scheduler-functional-159819             kube-system
	245f36ff2b5ac       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 10 minutes ago      Running             coredns                     1                   7f13237b861b3       coredns-66bc5c9577-w2fr2                     kube-system
	ae12676368c0b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 10 minutes ago      Running             storage-provisioner         1                   8f4a7a85bb85a       storage-provisioner                          kube-system
	0f3013c3721f4       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 11 minutes ago      Exited              coredns                     0                   7f13237b861b3       coredns-66bc5c9577-w2fr2                     kube-system
	2b838fcf6ad89       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Exited              storage-provisioner         0                   8f4a7a85bb85a       storage-provisioner                          kube-system
	48cfe9534cbee       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 11 minutes ago      Exited              kindnet-cni                 0                   b3cfa544f76ac       kindnet-w7vhs                                kube-system
	c95819004b728       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                 11 minutes ago      Exited              kube-proxy                  0                   fa932d0535dd0       kube-proxy-24f8m                             kube-system
	f13ad9d0f1d08       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                 11 minutes ago      Exited              kube-scheduler              0                   d34b43b060e7f       kube-scheduler-functional-159819             kube-system
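	Note: consistent with the stuck pulls above, no echo-server container appears in this listing even though the hello-node deployments were created roughly 10 minutes earlier. The same view can be taken directly from CRI-O (standard crictl, from inside the node):
	
	  sudo crictl ps -a    # all containers, including exited ones
	  sudo crictl pods     # pod sandboxes, including any without app containers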
	
	
	==> coredns [0f3013c3721f4ca7dfbc0801f8c99b869f8344b483de490fb75fe8bbd5817279] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39332 - 54829 "HINFO IN 6455936446435641953.6837119890003038408. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.115446261s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [245f36ff2b5ac47522ca18d30fc970f4a2101add1e08bde9b53474bfc0c081a3] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50441 - 10811 "HINFO IN 6320927195811438233.8160156114426052040. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.471135419s
	
	
	==> describe nodes <==
	Name:               functional-159819
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-159819
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785
	                    minikube.k8s.io/name=functional-159819
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_21T23_52_43_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Nov 2025 23:52:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-159819
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 22 Nov 2025 00:04:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 22 Nov 2025 00:04:09 +0000   Fri, 21 Nov 2025 23:52:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 22 Nov 2025 00:04:09 +0000   Fri, 21 Nov 2025 23:52:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 22 Nov 2025 00:04:09 +0000   Fri, 21 Nov 2025 23:52:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 22 Nov 2025 00:04:09 +0000   Fri, 21 Nov 2025 23:53:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-159819
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 5665009e93b91d39dc05718b691e3875
	  System UUID:                51adb99a-d6e8-41f2-8866-e936f4a82438
	  Boot ID:                    8e6acb0e-95c3-406d-a4d5-d86e16610b0f
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-85wmx                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-7fcbt           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-9bkv6                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     9m55s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m48s
	  kube-system                 coredns-66bc5c9577-w2fr2                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 etcd-functional-159819                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kindnet-w7vhs                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-functional-159819              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-159819     200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-24f8m                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-functional-159819              100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-4vqjw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m39s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-mrd6z         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  NodeHasSufficientMemory  11m                kubelet          Node functional-159819 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m                kubelet          Node functional-159819 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m                kubelet          Node functional-159819 status is now: NodeHasSufficientPID
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           11m                node-controller  Node functional-159819 event: Registered Node functional-159819 in Controller
	  Normal  NodeReady                11m                kubelet          Node functional-159819 status is now: NodeReady
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-159819 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-159819 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-159819 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                node-controller  Node functional-159819 event: Registered Node functional-159819 in Controller
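	Note: the percentages in the Allocated resources block above check out: CPU requests sum to 600m + 100m + 100m + 100m + 250m + 200m + 100m = 1450m, and 1450m of the node's 8000m (8 CPUs) is about 18%, as shown; memory requests sum to 512Mi + 70Mi + 100Mi + 50Mi = 732Mi, about 2% of the 32863352Ki capacity.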
	
	
	==> dmesg <==
	[  +0.082960] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023673] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.276450] kauditd_printk_skb: 47 callbacks suppressed
	[Nov21 23:48] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.062274] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.023896] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.023887] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.023879] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.023873] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +2.047773] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +4.031577] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[Nov21 23:49] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[ +16.383304] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[ +32.252695] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	
	
	==> etcd [464b0edce6e2d4e31c3044db72d7d8935cc42365cb1276f675bf5361781b542a] <==
	{"level":"warn","ts":"2025-11-21T23:54:06.558208Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:54:06.566206Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:54:06.571833Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:54:06.577714Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:54:06.583537Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:54:06.590241Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:54:06.598174Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:54:06.603979Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:54:06.609592Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:54:06.615139Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:54:06.620663Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:54:06.626207Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:54:06.631688Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:54:06.638093Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:54:06.644673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:54:06.650508Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:54:06.656375Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:54:06.662178Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:54:06.678986Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:54:06.684686Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:54:06.690477Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:54:06.738232Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48070","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-22T00:04:06.281458Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1141}
	{"level":"info","ts":"2025-11-22T00:04:06.300652Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1141,"took":"18.874731ms","hash":3370813765,"current-db-size-bytes":3522560,"current-db-size":"3.5 MB","current-db-size-in-use-bytes":1576960,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2025-11-22T00:04:06.300690Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3370813765,"revision":1141,"compact-revision":-1}
	
	
	==> etcd [6cd4b0ffd08c4d370ce35bd0e0d83af3a9ba74760d42a6682360d07f65d0f3e4] <==
	
	
	==> kernel <==
	 00:04:38 up 47 min,  0 user,  load average: 0.48, 0.24, 0.31
	Linux functional-159819 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [48cfe9534cbeea31dae9ff229be7f8c8368ba5c644c486664c2616c16d5f791e] <==
	I1121 23:52:48.821702       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1121 23:52:48.821989       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1121 23:52:48.822180       1 main.go:148] setting mtu 1500 for CNI 
	I1121 23:52:48.822201       1 main.go:178] kindnetd IP family: "ipv4"
	I1121 23:52:48.822228       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-21T23:52:48Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1121 23:52:48.931340       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1121 23:52:48.931361       1 controller.go:381] "Waiting for informer caches to sync"
	I1121 23:52:48.931377       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1121 23:52:48.931938       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1121 23:53:18.932561       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1121 23:53:18.932599       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1121 23:53:18.932739       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1121 23:53:18.932736       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1121 23:53:20.531531       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1121 23:53:20.531554       1 metrics.go:72] Registering metrics
	I1121 23:53:20.531594       1 controller.go:711] "Syncing nftables rules"
	I1121 23:53:28.930952       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 23:53:28.930985       1 main.go:301] handling current node
	I1121 23:53:38.934131       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 23:53:38.934160       1 main.go:301] handling current node
	I1121 23:53:48.933284       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 23:53:48.933311       1 main.go:301] handling current node
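	Note: the i/o timeouts above target the in-cluster service VIP (10.96.0.1:443) while the apiserver restarts, and clear once caches sync at 23:53:20. A hedged way to re-probe that path from inside the node (assuming curl is present in the node image; /version is readable without credentials under default RBAC):
	
	  curl -k https://10.96.0.1:443/version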
	
	
	==> kindnet [c544185adbd3d5d2d3ee98056b5481dbc34368ad63bd756d809e72aba026cca9] <==
	I1122 00:02:34.319144       1 main.go:301] handling current node
	I1122 00:02:44.318478       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1122 00:02:44.318513       1 main.go:301] handling current node
	I1122 00:02:54.326604       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1122 00:02:54.326633       1 main.go:301] handling current node
	I1122 00:03:04.320464       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1122 00:03:04.320498       1 main.go:301] handling current node
	I1122 00:03:14.318081       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1122 00:03:14.318112       1 main.go:301] handling current node
	I1122 00:03:24.319140       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1122 00:03:24.319198       1 main.go:301] handling current node
	I1122 00:03:34.319021       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1122 00:03:34.319093       1 main.go:301] handling current node
	I1122 00:03:44.317919       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1122 00:03:44.317952       1 main.go:301] handling current node
	I1122 00:03:54.320529       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1122 00:03:54.320555       1 main.go:301] handling current node
	I1122 00:04:04.320708       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1122 00:04:04.320745       1 main.go:301] handling current node
	I1122 00:04:14.318167       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1122 00:04:14.318197       1 main.go:301] handling current node
	I1122 00:04:24.327189       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1122 00:04:24.327227       1 main.go:301] handling current node
	I1122 00:04:34.320935       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1122 00:04:34.320964       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a47ae4fabf8d822256b092b82f6f19ebe545b3034f55a8872ee2ddf4df58efe5] <==
	
	
	==> kube-apiserver [f695d445edcef463827a689033373f5328c14ca3f093d15790ca1a0c59d1811e] <==
	I1121 23:54:07.209069       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1121 23:54:08.073496       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1121 23:54:08.278650       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1121 23:54:08.279663       1 controller.go:667] quota admission added evaluator for: endpoints
	I1121 23:54:08.283496       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1121 23:54:09.005800       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1121 23:54:09.088973       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1121 23:54:09.135712       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1121 23:54:09.140709       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1121 23:54:10.910912       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1121 23:54:25.939998       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.100.106.211"}
	I1121 23:54:31.046551       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.97.50.74"}
	I1121 23:54:33.242777       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.109.214.18"}
	I1121 23:54:36.740290       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.96.249.245"}
	I1121 23:54:43.957295       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.106.220.36"}
	E1121 23:54:49.016013       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:60972: use of closed network connection
	E1121 23:54:57.074338       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:38592: use of closed network connection
	E1121 23:54:57.531551       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:38620: use of closed network connection
	E1121 23:54:57.684483       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:38630: use of closed network connection
	E1121 23:54:58.654320       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:38646: use of closed network connection
	I1121 23:54:59.729315       1 controller.go:667] quota admission added evaluator for: namespaces
	I1121 23:54:59.836793       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.143.120"}
	I1121 23:54:59.849788       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.212.226"}
	E1121 23:55:00.858696       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:38708: use of closed network connection
	I1122 00:04:07.105114       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
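	Note: each "allocated clusterIPs" line above corresponds to a Service object created during the test. A hedged cross-check of those allocations against the live cluster (plain kubectl, nothing test-specific):
	
	  kubectl --context functional-159819 get svc -A -o wide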
	
	
	==> kube-controller-manager [16c7fb794e902cdd7a5d3ac2bda5eed09f22b62098e0aee4c95d8c3cbbfbbd51] <==
	I1121 23:54:10.507910       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1121 23:54:10.507932       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1121 23:54:10.507956       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1121 23:54:10.507993       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1121 23:54:10.508089       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1121 23:54:10.508102       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1121 23:54:10.508370       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1121 23:54:10.511253       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1121 23:54:10.511344       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 23:54:10.512407       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1121 23:54:10.513535       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1121 23:54:10.514627       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1121 23:54:10.515739       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1121 23:54:10.515759       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 23:54:10.515791       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1121 23:54:10.515857       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-159819"
	I1121 23:54:10.515896       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1121 23:54:10.518170       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1121 23:54:10.519086       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1121 23:54:10.526229       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1121 23:54:59.771132       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1121 23:54:59.774235       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1121 23:54:59.780408       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1121 23:54:59.780468       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1121 23:54:59.784419       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [bcbb97bc0afcc5551d387a8bb8ddc58f62658d30034ddeb103f75073076bbdec] <==
	I1121 23:53:54.311326       1 serving.go:386] Generated self-signed cert in-memory
	I1121 23:53:54.688188       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1121 23:53:54.688213       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 23:53:54.689574       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1121 23:53:54.689579       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1121 23:53:54.689939       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1121 23:53:54.690021       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1121 23:54:04.691705       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused"
	
	
	==> kube-proxy [34f2d8301defac217236c046ee90235c30e5c42448184ab539c1f2ecb6e81b9d] <==
	I1121 23:53:54.046737       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1121 23:53:54.047644       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-159819&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1121 23:53:55.306017       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-159819&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1121 23:53:57.784101       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-159819&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1121 23:54:03.658492       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-159819&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1121 23:54:14.446888       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1121 23:54:14.446921       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1121 23:54:14.447005       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1121 23:54:14.464613       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1121 23:54:14.464650       1 server_linux.go:132] "Using iptables Proxier"
	I1121 23:54:14.469690       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1121 23:54:14.470009       1 server.go:527] "Version info" version="v1.34.1"
	I1121 23:54:14.470027       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 23:54:14.471124       1 config.go:200] "Starting service config controller"
	I1121 23:54:14.471147       1 config.go:106] "Starting endpoint slice config controller"
	I1121 23:54:14.471150       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1121 23:54:14.471160       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1121 23:54:14.471113       1 config.go:403] "Starting serviceCIDR config controller"
	I1121 23:54:14.471173       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1121 23:54:14.471198       1 config.go:309] "Starting node config controller"
	I1121 23:54:14.471209       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1121 23:54:14.471218       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1121 23:54:14.571198       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1121 23:54:14.571297       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1121 23:54:14.571422       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [c95819004b72836b5b34822ed06ba03b1a1ff4935e0d9d2a8abdc8be23adc91f] <==
	I1121 23:52:48.651569       1 server_linux.go:53] "Using iptables proxy"
	I1121 23:52:48.723349       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1121 23:52:48.823724       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1121 23:52:48.823772       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1121 23:52:48.823925       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1121 23:52:48.844741       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1121 23:52:48.844809       1 server_linux.go:132] "Using iptables Proxier"
	I1121 23:52:48.850447       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1121 23:52:48.850862       1 server.go:527] "Version info" version="v1.34.1"
	I1121 23:52:48.850895       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 23:52:48.852314       1 config.go:106] "Starting endpoint slice config controller"
	I1121 23:52:48.852335       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1121 23:52:48.852369       1 config.go:309] "Starting node config controller"
	I1121 23:52:48.852375       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1121 23:52:48.852377       1 config.go:403] "Starting serviceCIDR config controller"
	I1121 23:52:48.852394       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1121 23:52:48.852497       1 config.go:200] "Starting service config controller"
	I1121 23:52:48.852541       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1121 23:52:48.953100       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1121 23:52:48.953119       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1121 23:52:48.953148       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1121 23:52:48.953200       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [f13ad9d0f1d08c74ba7812a046ba85e7861a3851bb0eaf0e9a5a038f1125f5e5] <==
	E1121 23:52:40.338636       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1121 23:52:40.338686       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1121 23:52:40.338770       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1121 23:52:40.338776       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1121 23:52:40.338809       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1121 23:52:40.338820       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1121 23:52:40.339262       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1121 23:52:40.339270       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1121 23:52:40.339328       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1121 23:52:40.339351       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1121 23:52:40.339386       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1121 23:52:40.339420       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1121 23:52:41.201026       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1121 23:52:41.374038       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1121 23:52:41.414885       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1121 23:52:41.432764       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1121 23:52:41.506014       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1121 23:52:41.529033       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1121 23:52:43.235162       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 23:53:53.097717       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1121 23:53:53.097717       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1121 23:53:53.097828       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1121 23:53:53.097873       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 23:53:53.097881       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1121 23:53:53.097955       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [f21c0b64242469e85d1eb4d511382be7f5b06091ae35fab06a53422e818febe9] <==
	E1121 23:53:58.506436       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1121 23:53:58.679357       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1121 23:53:58.706909       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1121 23:53:58.768510       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1121 23:53:58.905096       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1121 23:54:01.461019       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1121 23:54:01.466403       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1121 23:54:01.548936       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1121 23:54:01.574306       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1121 23:54:01.604630       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1121 23:54:01.770654       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1121 23:54:01.888380       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1121 23:54:02.092479       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1121 23:54:02.387021       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1121 23:54:02.428567       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1121 23:54:02.444423       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1121 23:54:02.624217       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1121 23:54:03.091324       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1121 23:54:03.212989       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1121 23:54:03.541967       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1121 23:54:03.636476       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1121 23:54:04.011361       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1121 23:54:04.167317       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1121 23:54:04.231909       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	I1121 23:54:08.838101       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 22 00:01:48 functional-159819 kubelet[4171]: E1122 00:01:48.172972    4171 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-7fcbt" podUID="1d9f7d1d-074b-42c9-aace-ec9f03351d4c"
	Nov 22 00:01:58 functional-159819 kubelet[4171]: E1122 00:01:58.173278    4171 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-85wmx" podUID="32bbf87d-f492-470c-9c21-65edbe5632b5"
	Nov 22 00:02:03 functional-159819 kubelet[4171]: E1122 00:02:03.173228    4171 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-7fcbt" podUID="1d9f7d1d-074b-42c9-aace-ec9f03351d4c"
	Nov 22 00:02:13 functional-159819 kubelet[4171]: E1122 00:02:13.173414    4171 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-85wmx" podUID="32bbf87d-f492-470c-9c21-65edbe5632b5"
	Nov 22 00:02:14 functional-159819 kubelet[4171]: E1122 00:02:14.173075    4171 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-7fcbt" podUID="1d9f7d1d-074b-42c9-aace-ec9f03351d4c"
	Nov 22 00:02:25 functional-159819 kubelet[4171]: E1122 00:02:25.173846    4171 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-85wmx" podUID="32bbf87d-f492-470c-9c21-65edbe5632b5"
	Nov 22 00:02:27 functional-159819 kubelet[4171]: E1122 00:02:27.172874    4171 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-7fcbt" podUID="1d9f7d1d-074b-42c9-aace-ec9f03351d4c"
	Nov 22 00:02:37 functional-159819 kubelet[4171]: E1122 00:02:37.172947    4171 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-85wmx" podUID="32bbf87d-f492-470c-9c21-65edbe5632b5"
	Nov 22 00:02:39 functional-159819 kubelet[4171]: E1122 00:02:39.173415    4171 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-7fcbt" podUID="1d9f7d1d-074b-42c9-aace-ec9f03351d4c"
	Nov 22 00:02:51 functional-159819 kubelet[4171]: E1122 00:02:51.172713    4171 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-85wmx" podUID="32bbf87d-f492-470c-9c21-65edbe5632b5"
	Nov 22 00:02:53 functional-159819 kubelet[4171]: E1122 00:02:53.173359    4171 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-7fcbt" podUID="1d9f7d1d-074b-42c9-aace-ec9f03351d4c"
	Nov 22 00:03:06 functional-159819 kubelet[4171]: E1122 00:03:06.173482    4171 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-85wmx" podUID="32bbf87d-f492-470c-9c21-65edbe5632b5"
	Nov 22 00:03:08 functional-159819 kubelet[4171]: E1122 00:03:08.173329    4171 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-7fcbt" podUID="1d9f7d1d-074b-42c9-aace-ec9f03351d4c"
	Nov 22 00:03:19 functional-159819 kubelet[4171]: E1122 00:03:19.173196    4171 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-85wmx" podUID="32bbf87d-f492-470c-9c21-65edbe5632b5"
	Nov 22 00:03:22 functional-159819 kubelet[4171]: E1122 00:03:22.173254    4171 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-7fcbt" podUID="1d9f7d1d-074b-42c9-aace-ec9f03351d4c"
	Nov 22 00:03:31 functional-159819 kubelet[4171]: E1122 00:03:31.173267    4171 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-85wmx" podUID="32bbf87d-f492-470c-9c21-65edbe5632b5"
	Nov 22 00:03:33 functional-159819 kubelet[4171]: E1122 00:03:33.173016    4171 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-7fcbt" podUID="1d9f7d1d-074b-42c9-aace-ec9f03351d4c"
	Nov 22 00:03:45 functional-159819 kubelet[4171]: E1122 00:03:45.173735    4171 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-85wmx" podUID="32bbf87d-f492-470c-9c21-65edbe5632b5"
	Nov 22 00:03:47 functional-159819 kubelet[4171]: E1122 00:03:47.172804    4171 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-7fcbt" podUID="1d9f7d1d-074b-42c9-aace-ec9f03351d4c"
	Nov 22 00:03:59 functional-159819 kubelet[4171]: E1122 00:03:59.173231    4171 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-85wmx" podUID="32bbf87d-f492-470c-9c21-65edbe5632b5"
	Nov 22 00:04:00 functional-159819 kubelet[4171]: E1122 00:04:00.173793    4171 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-7fcbt" podUID="1d9f7d1d-074b-42c9-aace-ec9f03351d4c"
	Nov 22 00:04:13 functional-159819 kubelet[4171]: E1122 00:04:13.172472    4171 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-85wmx" podUID="32bbf87d-f492-470c-9c21-65edbe5632b5"
	Nov 22 00:04:14 functional-159819 kubelet[4171]: E1122 00:04:14.173233    4171 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-7fcbt" podUID="1d9f7d1d-074b-42c9-aace-ec9f03351d4c"
	Nov 22 00:04:27 functional-159819 kubelet[4171]: E1122 00:04:27.173196    4171 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-85wmx" podUID="32bbf87d-f492-470c-9c21-65edbe5632b5"
	Nov 22 00:04:29 functional-159819 kubelet[4171]: E1122 00:04:29.173335    4171 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-7fcbt" podUID="1d9f7d1d-074b-42c9-aace-ec9f03351d4c"
	
	
	==> kubernetes-dashboard [135e1ed32497c53102d2648e18234de2f4c6f28a6f2809b500edd1223c614a35] <==
	2025/11/21 23:55:03 Using namespace: kubernetes-dashboard
	2025/11/21 23:55:03 Using in-cluster config to connect to apiserver
	2025/11/21 23:55:03 Using secret token for csrf signing
	2025/11/21 23:55:03 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/21 23:55:03 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/21 23:55:03 Successful initial request to the apiserver, version: v1.34.1
	2025/11/21 23:55:03 Generating JWE encryption key
	2025/11/21 23:55:03 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/21 23:55:03 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/21 23:55:03 Initializing JWE encryption key from synchronized object
	2025/11/21 23:55:03 Creating in-cluster Sidecar client
	2025/11/21 23:55:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/21 23:55:03 Serving insecurely on HTTP port: 9090
	2025/11/21 23:55:33 Successful request to sidecar
	2025/11/21 23:55:03 Starting overwatch
	
	
	==> storage-provisioner [2b838fcf6ad897d435bb5b6f02832605463e3cf69c61cc82e98c7d76940a0f90] <==
	W1121 23:53:29.480568       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:53:29.483707       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1121 23:53:29.580073       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-159819_7c4f56eb-b77b-4d36-aabd-2b61334ca14d!
	W1121 23:53:31.486834       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:53:31.490281       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:53:33.493102       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:53:33.496967       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:53:35.500197       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:53:35.503960       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:53:37.506662       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:53:37.510247       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:53:39.513322       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:53:39.517738       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:53:41.521351       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:53:41.525083       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:53:43.527759       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:53:43.532540       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:53:45.535448       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:53:45.540442       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:53:47.542727       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:53:47.546140       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:53:49.549123       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:53:49.552529       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:53:51.555019       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:53:51.558519       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [ae12676368c0ba7126e6357f03e02e03890f4e380ba1baff4eac33400d57a33b] <==
	W1122 00:04:14.366630       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:04:16.369871       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:04:16.373139       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:04:18.375410       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:04:18.379574       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:04:20.382103       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:04:20.385402       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:04:22.387806       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:04:22.391116       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:04:24.393843       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:04:24.398327       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:04:26.400829       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:04:26.404224       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:04:28.406470       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:04:28.410403       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:04:30.413921       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:04:30.417670       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:04:32.419654       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:04:32.423028       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:04:34.425142       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:04:34.429518       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:04:36.433074       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:04:36.436411       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:04:38.439163       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:04:38.443108       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
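The recurring kubelet error above ("short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list") is CRI-O's short-name resolution at work: with short-name-mode = "enforcing" in the containers registries configuration, an unqualified image reference that could resolve against more than one registry is rejected rather than pulled. A minimal sketch of a drop-in alias that would disambiguate the name on the node; the file path and the docker.io mapping are illustrative assumptions, not taken from this run:

	# /etc/containers/registries.conf.d/echo-server.conf (hypothetical drop-in)
	# Map the ambiguous short name to one fully qualified reference so that
	# enforcing-mode resolution no longer fails with "ambiguous list".
	[aliases]
	  "kicbase/echo-server" = "docker.io/kicbase/echo-server"

Equivalently, the test could sidestep short-name resolution entirely by deploying a fully qualified reference, e.g. kubectl create deployment hello-node --image=docker.io/kicbase/echo-server (the docker.io prefix is an assumption about where the image is hosted).

The storage-provisioner warnings ("v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice") mean its client code still watches core v1 Endpoints. A minimal client-go sketch of the suggested replacement, listing EndpointSlices instead; the kube-system namespace and in-cluster config are assumptions for illustration:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		// In-cluster config, as a provisioner pod would use it.
		cfg, err := rest.InClusterConfig()
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// discovery.k8s.io/v1 EndpointSlice replaces the deprecated core v1 Endpoints.
		slices, err := client.DiscoveryV1().EndpointSlices("kube-system").List(
			context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, s := range slices.Items {
			fmt.Printf("%s: %d endpoints\n", s.Name, len(s.Endpoints))
		}
	}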
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-159819 -n functional-159819
helpers_test.go:269: (dbg) Run:  kubectl --context functional-159819 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-85wmx hello-node-connect-7d85dfc575-7fcbt
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-159819 describe pod busybox-mount hello-node-75c85bcc94-85wmx hello-node-connect-7d85dfc575-7fcbt
helpers_test.go:290: (dbg) kubectl --context functional-159819 describe pod busybox-mount hello-node-75c85bcc94-85wmx hello-node-connect-7d85dfc575-7fcbt:

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-159819/192.168.49.2
	Start Time:       Fri, 21 Nov 2025 23:55:02 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.12
	IPs:
	  IP:  10.244.0.12
	Containers:
	  mount-munger:
	    Container ID:  cri-o://f0ce936a2126c87c7d7f465b5eb580cba30376142c653dc6181f83a7d0d282f8
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Fri, 21 Nov 2025 23:55:05 +0000
	      Finished:     Fri, 21 Nov 2025 23:55:05 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bkt6c (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-bkt6c:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  9m37s  default-scheduler  Successfully assigned default/busybox-mount to functional-159819
	  Normal  Pulling    9m36s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     9m34s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 659ms (2.093s including waiting). Image size: 4631262 bytes.
	  Normal  Created    9m34s  kubelet            Created container: mount-munger
	  Normal  Started    9m34s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-85wmx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-159819/192.168.49.2
	Start Time:       Fri, 21 Nov 2025 23:54:30 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-l6n8t (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-l6n8t:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-85wmx to functional-159819
	  Normal   Pulling    7m12s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m12s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m12s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m57s (x20 over 10m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m45s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-7fcbt
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-159819/192.168.49.2
	Start Time:       Fri, 21 Nov 2025 23:54:36 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vmgzp (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-vmgzp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-7fcbt to functional-159819
	  Normal   Pulling    7m9s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m9s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m9s (x5 over 10m)    kubelet            Error: ErrImagePull
	  Warning  Failed     5m1s (x20 over 10m)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m49s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"

-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (602.83s)
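Both echo-server pods in this test sat in ImagePullBackOff for the entire 10-minute wait, so the 602s failure traces back to the ambiguous short name above, not the service connectivity actually under test. One way to confirm would be to repoint the running deployment at a fully qualified reference and watch the pod recover; the container name comes from the describe output, while the docker.io prefix is an assumption:

	kubectl --context functional-159819 set image deployment/hello-node-connect echo-server=docker.io/kicbase/echo-server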

TestFunctional/parallel/ServiceCmd/DeployApp (600.6s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-159819 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-159819 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-85wmx" [32bbf87d-f492-470c-9c21-65edbe5632b5] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-159819 -n functional-159819
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-11-22 00:04:31.367109033 +0000 UTC m=+1104.058370592
functional_test.go:1460: (dbg) Run:  kubectl --context functional-159819 describe po hello-node-75c85bcc94-85wmx -n default
functional_test.go:1460: (dbg) kubectl --context functional-159819 describe po hello-node-75c85bcc94-85wmx -n default:
Name:             hello-node-75c85bcc94-85wmx
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-159819/192.168.49.2
Start Time:       Fri, 21 Nov 2025 23:54:30 +0000
Labels:           app=hello-node
                  pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.4
IPs:
  IP:           10.244.0.4
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-l6n8t (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-l6n8t:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-75c85bcc94-85wmx to functional-159819
  Normal   Pulling    7m4s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m4s (x5 over 10m)      kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     7m4s (x5 over 10m)      kubelet            Error: ErrImagePull
  Warning  Failed     4m49s (x20 over 9m59s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m37s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-159819 logs hello-node-75c85bcc94-85wmx -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-159819 logs hello-node-75c85bcc94-85wmx -n default: exit status 1 (64.050955ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-85wmx" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-159819 logs hello-node-75c85bcc94-85wmx -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.60s)
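Note: the same enforcing short-name failure sinks this deployment. An alternative to qualifying the image at deploy time is a short-name alias inside the minikube node. This is an untested sketch; the drop-in file name is an assumption:

    out/minikube-linux-amd64 -p functional-159819 ssh -- sudo tee /etc/containers/registries.conf.d/01-echo-server.conf <<'EOF'
    [aliases]
    "kicbase/echo-server" = "docker.io/kicbase/echo-server"
    EOF

A subsequent pull attempt should then resolve the short name through the alias rather than failing as ambiguous.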

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 image load --daemon kicbase/echo-server:functional-159819 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-159819" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.99s)
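Note: when `image load --daemon` returns success but `image ls` disagrees, it helps to check both ends of the transfer: the host Docker daemon the image is read from, and the CRI-O image store it should land in. A quick sketch:

    docker image inspect kicbase/echo-server:functional-159819 --format '{{.Id}}'
    out/minikube-linux-amd64 -p functional-159819 ssh -- sudo crictl images | grep echo-server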

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 image load --daemon kicbase/echo-server:functional-159819 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-159819" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.90s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-159819
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 image load --daemon kicbase/echo-server:functional-159819 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-159819" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.62s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 image save kicbase/echo-server:functional-159819 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1121 23:54:36.011958   49361 out.go:360] Setting OutFile to fd 1 ...
	I1121 23:54:36.012319   49361 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:54:36.012330   49361 out.go:374] Setting ErrFile to fd 2...
	I1121 23:54:36.012334   49361 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:54:36.012560   49361 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9122/.minikube/bin
	I1121 23:54:36.013110   49361 config.go:182] Loaded profile config "functional-159819": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:54:36.013209   49361 config.go:182] Loaded profile config "functional-159819": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:54:36.013620   49361 cli_runner.go:164] Run: docker container inspect functional-159819 --format={{.State.Status}}
	I1121 23:54:36.031655   49361 ssh_runner.go:195] Run: systemctl --version
	I1121 23:54:36.031691   49361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-159819
	I1121 23:54:36.048083   49361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/functional-159819/id_rsa Username:docker}
	I1121 23:54:36.135793   49361 cache_images.go:291] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	W1121 23:54:36.135842   49361 cache_images.go:255] Failed to load cached images for "functional-159819": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar: no such file or directory
	I1121 23:54:36.135863   49361 cache_images.go:267] failed pushing to: functional-159819

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.18s)
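Note: this failure is downstream of ImageSaveToFile above: the tarball was never written, so `image load` fails at the stat before any loading happens. The intended round trip, sketched here with a hypothetical /tmp path:

    out/minikube-linux-amd64 -p functional-159819 image save kicbase/echo-server:functional-159819 /tmp/echo-server.tar
    ls -l /tmp/echo-server.tar    # must exist before the load below has anything to read
    out/minikube-linux-amd64 -p functional-159819 image load /tmp/echo-server.tar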

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-159819
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 image save --daemon kicbase/echo-server:functional-159819 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-159819
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-159819: exit status 1 (16.999436ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-159819

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-159819

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.32s)
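Note: the test expects `image save --daemon` to re-create the image in the host Docker daemon under the localhost/ prefix. A sketch that lists whatever tags did arrive, rather than inspecting one exact name:

    docker images --format '{{.Repository}}:{{.Tag}}' | grep echo-server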

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-159819 service --namespace=default --https --url hello-node: exit status 115 (515.697695ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:31146
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-159819 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)
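Note: SVC_UNREACHABLE here, and in the Format and URL subtests below, is a knock-on effect of the hello-node pod stuck in ImagePullBackOff: the Service exists and a NodePort (31146) is allocated, but there are no ready endpoints behind it. A sketch to confirm that directly:

    kubectl --context functional-159819 get endpoints hello-node
    kubectl --context functional-159819 get pods -l app=hello-node -o wide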

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-159819 service hello-node --url --format={{.IP}}: exit status 115 (512.27327ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-159819 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.51s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-159819 service hello-node --url: exit status 115 (517.634068ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:31146
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-159819 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31146
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.52s)

                                                
                                    
TestJSONOutput/pause/Command (2.44s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-703168 --output=json --user=testUser
E1122 00:14:58.755611   14585 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/functional-159819/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p json-output-703168 --output=json --user=testUser: exit status 80 (2.443226117s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"90687ee5-52ca-4819-866a-39802aefb823","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-703168 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"faa8a7fa-b40b-4a2f-a61a-31b60fc2bfca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-22T00:15:00Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"e1563264-acd3-4894-8e0a-a952cfcea519","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 pause -p json-output-703168 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.44s)
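Note: the GUEST_PAUSE error comes from minikube running `sudo runc list -f json` inside the node, which fails because runc's default state directory /run/runc does not exist there. A hedged diagnostic sketch, assuming the json-output-703168 node is still up:

    out/minikube-linux-amd64 -p json-output-703168 ssh -- sudo ls /run
    out/minikube-linux-amd64 -p json-output-703168 ssh -- sudo runc --root /run/runc list -f json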

                                                
                                    
TestJSONOutput/unpause/Command (1.88s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-703168 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 unpause -p json-output-703168 --output=json --user=testUser: exit status 80 (1.878383453s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"54be8ccd-4e93-4270-9551-9ab07f375d31","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-703168 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"aae8d7b2-fb48-4c5f-8c88-b3add8cae932","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-22T00:15:01Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"049fd836-60f7-4e45-8630-8ce63b0cdf7c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 unpause -p json-output-703168 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.88s)

                                                
                                    
TestPause/serial/Pause (7.01s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-044220 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-044220 --alsologtostderr -v=5: exit status 80 (2.528321947s)

                                                
                                                
-- stdout --
	* Pausing node pause-044220 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1122 00:28:25.355041  201289 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:28:25.355423  201289 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:28:25.355454  201289 out.go:374] Setting ErrFile to fd 2...
	I1122 00:28:25.355486  201289 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:28:25.355888  201289 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9122/.minikube/bin
	I1122 00:28:25.356600  201289 out.go:368] Setting JSON to false
	I1122 00:28:25.356685  201289 mustload.go:66] Loading cluster: pause-044220
	I1122 00:28:25.357311  201289 config.go:182] Loaded profile config "pause-044220": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:28:25.357915  201289 cli_runner.go:164] Run: docker container inspect pause-044220 --format={{.State.Status}}
	I1122 00:28:25.382394  201289 host.go:66] Checking if "pause-044220" exists ...
	I1122 00:28:25.382760  201289 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:28:25.500607  201289 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:86 SystemTime:2025-11-22 00:28:25.478505667 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1122 00:28:25.501768  201289 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-044220 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1122 00:28:25.503766  201289 out.go:179] * Pausing node pause-044220 ... 
	I1122 00:28:25.504852  201289 host.go:66] Checking if "pause-044220" exists ...
	I1122 00:28:25.505366  201289 ssh_runner.go:195] Run: systemctl --version
	I1122 00:28:25.505469  201289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-044220
	I1122 00:28:25.543133  201289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32973 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/pause-044220/id_rsa Username:docker}
	I1122 00:28:25.658669  201289 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:28:25.690281  201289 pause.go:52] kubelet running: true
	I1122 00:28:25.690359  201289 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1122 00:28:25.893474  201289 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1122 00:28:25.893569  201289 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1122 00:28:25.977137  201289 cri.go:89] found id: "e462ee7150306df05ce83db1f3e0df58183280d43d04f1cb6d52feef9a3b7b3d"
	I1122 00:28:25.977224  201289 cri.go:89] found id: "f4bcdbc163f622f7ad75cecdf8d434f7eaf5d5abf5ce74eb74a0f51eaacea0e2"
	I1122 00:28:25.977233  201289 cri.go:89] found id: "0e4ad5609f7872bdcc433e845fd6360589ac30894e126e3f56e8d3cd82296232"
	I1122 00:28:25.977239  201289 cri.go:89] found id: "3e14061fd4fccbedc7dfab2388bb9c60556e3392e101adffd2e959a71915c5ba"
	I1122 00:28:25.977243  201289 cri.go:89] found id: "ecabd3963637033e9f4270fbbeef757d2c6b777cf5058c6dcad3e6914ca9e45c"
	I1122 00:28:25.977248  201289 cri.go:89] found id: "79e36c09e9fdbb8b8ebf65a808135710cb7702a0ba5485d102074e2baaf898e5"
	I1122 00:28:25.977252  201289 cri.go:89] found id: "b4370898f997f4499a567b711a03da7b4fec4f862abdda3f9d2f8bbdb7555955"
	I1122 00:28:25.977256  201289 cri.go:89] found id: ""
	I1122 00:28:25.977329  201289 ssh_runner.go:195] Run: sudo runc list -f json
	I1122 00:28:25.991597  201289 retry.go:31] will retry after 372.453918ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-22T00:28:25Z" level=error msg="open /run/runc: no such file or directory"
	I1122 00:28:26.365179  201289 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:28:26.379594  201289 pause.go:52] kubelet running: false
	I1122 00:28:26.379654  201289 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1122 00:28:26.504827  201289 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1122 00:28:26.504915  201289 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1122 00:28:26.578008  201289 cri.go:89] found id: "e462ee7150306df05ce83db1f3e0df58183280d43d04f1cb6d52feef9a3b7b3d"
	I1122 00:28:26.578031  201289 cri.go:89] found id: "f4bcdbc163f622f7ad75cecdf8d434f7eaf5d5abf5ce74eb74a0f51eaacea0e2"
	I1122 00:28:26.578037  201289 cri.go:89] found id: "0e4ad5609f7872bdcc433e845fd6360589ac30894e126e3f56e8d3cd82296232"
	I1122 00:28:26.578042  201289 cri.go:89] found id: "3e14061fd4fccbedc7dfab2388bb9c60556e3392e101adffd2e959a71915c5ba"
	I1122 00:28:26.578047  201289 cri.go:89] found id: "ecabd3963637033e9f4270fbbeef757d2c6b777cf5058c6dcad3e6914ca9e45c"
	I1122 00:28:26.578063  201289 cri.go:89] found id: "79e36c09e9fdbb8b8ebf65a808135710cb7702a0ba5485d102074e2baaf898e5"
	I1122 00:28:26.578067  201289 cri.go:89] found id: "b4370898f997f4499a567b711a03da7b4fec4f862abdda3f9d2f8bbdb7555955"
	I1122 00:28:26.578071  201289 cri.go:89] found id: ""
	I1122 00:28:26.578119  201289 ssh_runner.go:195] Run: sudo runc list -f json
	I1122 00:28:26.590815  201289 retry.go:31] will retry after 194.212205ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-22T00:28:26Z" level=error msg="open /run/runc: no such file or directory"
	I1122 00:28:26.785211  201289 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:28:26.803146  201289 pause.go:52] kubelet running: false
	I1122 00:28:26.803229  201289 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1122 00:28:26.990341  201289 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1122 00:28:26.990446  201289 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1122 00:28:27.068144  201289 cri.go:89] found id: "e462ee7150306df05ce83db1f3e0df58183280d43d04f1cb6d52feef9a3b7b3d"
	I1122 00:28:27.068176  201289 cri.go:89] found id: "f4bcdbc163f622f7ad75cecdf8d434f7eaf5d5abf5ce74eb74a0f51eaacea0e2"
	I1122 00:28:27.068183  201289 cri.go:89] found id: "0e4ad5609f7872bdcc433e845fd6360589ac30894e126e3f56e8d3cd82296232"
	I1122 00:28:27.068188  201289 cri.go:89] found id: "3e14061fd4fccbedc7dfab2388bb9c60556e3392e101adffd2e959a71915c5ba"
	I1122 00:28:27.068193  201289 cri.go:89] found id: "ecabd3963637033e9f4270fbbeef757d2c6b777cf5058c6dcad3e6914ca9e45c"
	I1122 00:28:27.068197  201289 cri.go:89] found id: "79e36c09e9fdbb8b8ebf65a808135710cb7702a0ba5485d102074e2baaf898e5"
	I1122 00:28:27.068202  201289 cri.go:89] found id: "b4370898f997f4499a567b711a03da7b4fec4f862abdda3f9d2f8bbdb7555955"
	I1122 00:28:27.068206  201289 cri.go:89] found id: ""
	I1122 00:28:27.068254  201289 ssh_runner.go:195] Run: sudo runc list -f json
	I1122 00:28:27.081773  201289 retry.go:31] will retry after 492.286731ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-22T00:28:27Z" level=error msg="open /run/runc: no such file or directory"
	I1122 00:28:27.574383  201289 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:28:27.589504  201289 pause.go:52] kubelet running: false
	I1122 00:28:27.589563  201289 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1122 00:28:27.714786  201289 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1122 00:28:27.714862  201289 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1122 00:28:27.781993  201289 cri.go:89] found id: "e462ee7150306df05ce83db1f3e0df58183280d43d04f1cb6d52feef9a3b7b3d"
	I1122 00:28:27.782021  201289 cri.go:89] found id: "f4bcdbc163f622f7ad75cecdf8d434f7eaf5d5abf5ce74eb74a0f51eaacea0e2"
	I1122 00:28:27.782025  201289 cri.go:89] found id: "0e4ad5609f7872bdcc433e845fd6360589ac30894e126e3f56e8d3cd82296232"
	I1122 00:28:27.782028  201289 cri.go:89] found id: "3e14061fd4fccbedc7dfab2388bb9c60556e3392e101adffd2e959a71915c5ba"
	I1122 00:28:27.782031  201289 cri.go:89] found id: "ecabd3963637033e9f4270fbbeef757d2c6b777cf5058c6dcad3e6914ca9e45c"
	I1122 00:28:27.782034  201289 cri.go:89] found id: "79e36c09e9fdbb8b8ebf65a808135710cb7702a0ba5485d102074e2baaf898e5"
	I1122 00:28:27.782037  201289 cri.go:89] found id: "b4370898f997f4499a567b711a03da7b4fec4f862abdda3f9d2f8bbdb7555955"
	I1122 00:28:27.782039  201289 cri.go:89] found id: ""
	I1122 00:28:27.782097  201289 ssh_runner.go:195] Run: sudo runc list -f json
	I1122 00:28:27.795118  201289 out.go:203] 
	W1122 00:28:27.796356  201289 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-22T00:28:27Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-22T00:28:27Z" level=error msg="open /run/runc: no such file or directory"
	
	W1122 00:28:27.796371  201289 out.go:285] * 
	* 
	W1122 00:28:27.802721  201289 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1122 00:28:27.803924  201289 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-044220 --alsologtostderr -v=5" : exit status 80
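Note: TestPause shows the same shape in more detail: crictl sees the seven kube-system containers, yet every `runc list` retry fails on the missing /run/runc, which suggests CRI-O is driving these containers through a runtime (or state root) other than runc's default. A sketch for checking the node's configured runtime; the /etc/crio layout is an assumption:

    out/minikube-linux-amd64 -p pause-044220 ssh -- sudo grep -R default_runtime /etc/crio
    out/minikube-linux-amd64 -p pause-044220 ssh -- sudo ls /run/runc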
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-044220
helpers_test.go:243: (dbg) docker inspect pause-044220:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "02e81c3924546be35753c49b9bd112bf70ebe28ddad6aec659918fdca1330440",
	        "Created": "2025-11-22T00:27:23.717910492Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 186483,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-22T00:27:23.763884218Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e5906b22e872a17998ae88aee6d850484e7a99144e0db6afcf2c44a53e6042d4",
	        "ResolvConfPath": "/var/lib/docker/containers/02e81c3924546be35753c49b9bd112bf70ebe28ddad6aec659918fdca1330440/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/02e81c3924546be35753c49b9bd112bf70ebe28ddad6aec659918fdca1330440/hostname",
	        "HostsPath": "/var/lib/docker/containers/02e81c3924546be35753c49b9bd112bf70ebe28ddad6aec659918fdca1330440/hosts",
	        "LogPath": "/var/lib/docker/containers/02e81c3924546be35753c49b9bd112bf70ebe28ddad6aec659918fdca1330440/02e81c3924546be35753c49b9bd112bf70ebe28ddad6aec659918fdca1330440-json.log",
	        "Name": "/pause-044220",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-044220:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-044220",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "02e81c3924546be35753c49b9bd112bf70ebe28ddad6aec659918fdca1330440",
	                "LowerDir": "/var/lib/docker/overlay2/ba74c1e54086f0f29631408ea6b72478e36f982e9c0fc0c25731651dcc2442a8-init/diff:/var/lib/docker/overlay2/ba9391341a6f38c98c330a240b44a37e901d779a7c95e15c141c59c46e09c348/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ba74c1e54086f0f29631408ea6b72478e36f982e9c0fc0c25731651dcc2442a8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ba74c1e54086f0f29631408ea6b72478e36f982e9c0fc0c25731651dcc2442a8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ba74c1e54086f0f29631408ea6b72478e36f982e9c0fc0c25731651dcc2442a8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-044220",
	                "Source": "/var/lib/docker/volumes/pause-044220/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-044220",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-044220",
	                "name.minikube.sigs.k8s.io": "pause-044220",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "767d608a3a6d89c165f823ffe414ded3d9c14f0a4cc7603ea37b610fd262784c",
	            "SandboxKey": "/var/run/docker/netns/767d608a3a6d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32973"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32974"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32977"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32975"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32976"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-044220": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cf1125a89f944b928e1da3985e14afd6320515efbd15f4b428e9b91fbf80e100",
	                    "EndpointID": "9974f353b9f4169828348be36f142228506f262548888847d789bffed4a920ab",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "2e:51:fa:f3:2f:0f",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-044220",
	                        "02e81c392454"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
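Note: the inspect output confirms the container itself is fine (Running: true, Paused: false), so the failure is inside the guest, not at the Docker layer. The SSH port mapping that the pause command used earlier can be read back with the same Go template minikube itself runs:

    docker inspect pause-044220 --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'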
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-044220 -n pause-044220
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-044220 -n pause-044220: exit status 2 (359.333546ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-044220 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-044220 logs -n 25: (1.170913494s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                       ARGS                                                        │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p test-preload-417192                                                                                            │ test-preload-417192         │ jenkins │ v1.37.0 │ 22 Nov 25 00:25 UTC │ 22 Nov 25 00:25 UTC │
	│ start   │ -p scheduled-stop-366786 --memory=3072 --driver=docker  --container-runtime=crio                                  │ scheduled-stop-366786       │ jenkins │ v1.37.0 │ 22 Nov 25 00:25 UTC │ 22 Nov 25 00:25 UTC │
	│ stop    │ -p scheduled-stop-366786 --schedule 5m -v=5 --alsologtostderr                                                     │ scheduled-stop-366786       │ jenkins │ v1.37.0 │ 22 Nov 25 00:25 UTC │                     │
	│ stop    │ -p scheduled-stop-366786 --schedule 5m -v=5 --alsologtostderr                                                     │ scheduled-stop-366786       │ jenkins │ v1.37.0 │ 22 Nov 25 00:25 UTC │                     │
	│ stop    │ -p scheduled-stop-366786 --schedule 5m -v=5 --alsologtostderr                                                     │ scheduled-stop-366786       │ jenkins │ v1.37.0 │ 22 Nov 25 00:25 UTC │                     │
	│ stop    │ -p scheduled-stop-366786 --schedule 15s -v=5 --alsologtostderr                                                    │ scheduled-stop-366786       │ jenkins │ v1.37.0 │ 22 Nov 25 00:25 UTC │                     │
	│ stop    │ -p scheduled-stop-366786 --schedule 15s -v=5 --alsologtostderr                                                    │ scheduled-stop-366786       │ jenkins │ v1.37.0 │ 22 Nov 25 00:25 UTC │                     │
	│ stop    │ -p scheduled-stop-366786 --schedule 15s -v=5 --alsologtostderr                                                    │ scheduled-stop-366786       │ jenkins │ v1.37.0 │ 22 Nov 25 00:25 UTC │                     │
	│ stop    │ -p scheduled-stop-366786 --cancel-scheduled                                                                       │ scheduled-stop-366786       │ jenkins │ v1.37.0 │ 22 Nov 25 00:25 UTC │ 22 Nov 25 00:25 UTC │
	│ stop    │ -p scheduled-stop-366786 --schedule 15s -v=5 --alsologtostderr                                                    │ scheduled-stop-366786       │ jenkins │ v1.37.0 │ 22 Nov 25 00:26 UTC │                     │
	│ stop    │ -p scheduled-stop-366786 --schedule 15s -v=5 --alsologtostderr                                                    │ scheduled-stop-366786       │ jenkins │ v1.37.0 │ 22 Nov 25 00:26 UTC │                     │
	│ stop    │ -p scheduled-stop-366786 --schedule 15s -v=5 --alsologtostderr                                                    │ scheduled-stop-366786       │ jenkins │ v1.37.0 │ 22 Nov 25 00:26 UTC │ 22 Nov 25 00:26 UTC │
	│ delete  │ -p scheduled-stop-366786                                                                                          │ scheduled-stop-366786       │ jenkins │ v1.37.0 │ 22 Nov 25 00:26 UTC │ 22 Nov 25 00:26 UTC │
	│ start   │ -p insufficient-storage-310459 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio  │ insufficient-storage-310459 │ jenkins │ v1.37.0 │ 22 Nov 25 00:26 UTC │                     │
	│ delete  │ -p insufficient-storage-310459                                                                                    │ insufficient-storage-310459 │ jenkins │ v1.37.0 │ 22 Nov 25 00:27 UTC │ 22 Nov 25 00:27 UTC │
	│ start   │ -p pause-044220 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio         │ pause-044220                │ jenkins │ v1.37.0 │ 22 Nov 25 00:27 UTC │ 22 Nov 25 00:28 UTC │
	│ start   │ -p force-systemd-env-087837 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio        │ force-systemd-env-087837    │ jenkins │ v1.37.0 │ 22 Nov 25 00:27 UTC │ 22 Nov 25 00:27 UTC │
	│ start   │ -p offline-crio-033967 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio │ offline-crio-033967         │ jenkins │ v1.37.0 │ 22 Nov 25 00:27 UTC │ 22 Nov 25 00:28 UTC │
	│ start   │ -p stopped-upgrade-220412 --memory=3072 --vm-driver=docker  --container-runtime=crio                              │ stopped-upgrade-220412      │ jenkins │ v1.32.0 │ 22 Nov 25 00:27 UTC │                     │
	│ delete  │ -p force-systemd-env-087837                                                                                       │ force-systemd-env-087837    │ jenkins │ v1.37.0 │ 22 Nov 25 00:27 UTC │ 22 Nov 25 00:27 UTC │
	│ start   │ -p running-upgrade-670577 --memory=3072 --vm-driver=docker  --container-runtime=crio                              │ running-upgrade-670577      │ jenkins │ v1.32.0 │ 22 Nov 25 00:27 UTC │                     │
	│ start   │ -p pause-044220 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                  │ pause-044220                │ jenkins │ v1.37.0 │ 22 Nov 25 00:28 UTC │ 22 Nov 25 00:28 UTC │
	│ delete  │ -p offline-crio-033967                                                                                            │ offline-crio-033967         │ jenkins │ v1.37.0 │ 22 Nov 25 00:28 UTC │ 22 Nov 25 00:28 UTC │
	│ start   │ -p cert-expiration-624739 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio            │ cert-expiration-624739      │ jenkins │ v1.37.0 │ 22 Nov 25 00:28 UTC │                     │
	│ pause   │ -p pause-044220 --alsologtostderr -v=5                                                                            │ pause-044220                │ jenkins │ v1.37.0 │ 22 Nov 25 00:28 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
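Rows with an empty END TIME never completed. The last row is the pause invocation under test; a minimal sketch of re-running it by hand, using the same binary path as the rest of this report:

	out/minikube-linux-amd64 pause -p pause-044220 --alsologtostderr -v=5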
	
	
	==> Last Start <==
	Log file created at: 2025/11/22 00:28:07
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1122 00:28:07.871356  194936 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:28:07.871463  194936 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:28:07.871466  194936 out.go:374] Setting ErrFile to fd 2...
	I1122 00:28:07.871469  194936 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:28:07.871732  194936 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9122/.minikube/bin
	I1122 00:28:07.872190  194936 out.go:368] Setting JSON to false
	I1122 00:28:07.873266  194936 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4237,"bootTime":1763767051,"procs":271,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1122 00:28:07.873324  194936 start.go:143] virtualization: kvm guest
	I1122 00:28:07.878483  194936 out.go:179] * [cert-expiration-624739] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1122 00:28:07.879684  194936 notify.go:221] Checking for updates...
	I1122 00:28:07.879709  194936 out.go:179]   - MINIKUBE_LOCATION=21934
	I1122 00:28:07.880856  194936 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 00:28:07.882024  194936 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-9122/kubeconfig
	I1122 00:28:07.883236  194936 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-9122/.minikube
	I1122 00:28:07.884249  194936 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1122 00:28:07.886651  194936 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 00:28:07.888966  194936 config.go:182] Loaded profile config "pause-044220": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:28:07.889120  194936 config.go:182] Loaded profile config "running-upgrade-670577": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1122 00:28:07.889247  194936 config.go:182] Loaded profile config "stopped-upgrade-220412": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1122 00:28:07.889377  194936 driver.go:422] Setting default libvirt URI to qemu:///system
	I1122 00:28:07.929222  194936 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1122 00:28:07.929354  194936 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:28:08.002702  194936 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:60 OomKillDisable:false NGoroutines:88 SystemTime:2025-11-22 00:28:07.989739604 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1122 00:28:08.002842  194936 docker.go:319] overlay module found
	I1122 00:28:08.008483  194936 out.go:179] * Using the docker driver based on user configuration
	I1122 00:28:08.009557  194936 start.go:309] selected driver: docker
	I1122 00:28:08.009566  194936 start.go:930] validating driver "docker" against <nil>
	I1122 00:28:08.009580  194936 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 00:28:08.010399  194936 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:28:08.081088  194936 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:60 OomKillDisable:false NGoroutines:88 SystemTime:2025-11-22 00:28:08.0696759 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1122 00:28:08.081244  194936 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1122 00:28:08.081432  194936 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1122 00:28:08.082888  194936 out.go:179] * Using Docker driver with root privileges
	I1122 00:28:08.084061  194936 cni.go:84] Creating CNI manager for ""
	I1122 00:28:08.084137  194936 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1122 00:28:08.084147  194936 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1122 00:28:08.084244  194936 start.go:353] cluster config:
	{Name:cert-expiration-624739 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-624739 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:28:08.085540  194936 out.go:179] * Starting "cert-expiration-624739" primary control-plane node in "cert-expiration-624739" cluster
	I1122 00:28:08.086602  194936 cache.go:134] Beginning downloading kic base image for docker with crio
	I1122 00:28:08.087663  194936 out.go:179] * Pulling base image v0.0.48-1763588073-21934 ...
	I1122 00:28:08.088687  194936 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:28:08.088767  194936 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon
	I1122 00:28:08.088822  194936 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21934-9122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1122 00:28:08.088834  194936 cache.go:65] Caching tarball of preloaded images
	I1122 00:28:08.088941  194936 preload.go:238] Found /home/jenkins/minikube-integration/21934-9122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1122 00:28:08.088949  194936 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1122 00:28:08.089095  194936 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/cert-expiration-624739/config.json ...
	I1122 00:28:08.089118  194936 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/cert-expiration-624739/config.json: {Name:mk6ad289b63cf9798b64fb02b5d9656644a7d337 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:28:08.121701  194936 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon, skipping pull
	I1122 00:28:08.121715  194936 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e exists in daemon, skipping load
	I1122 00:28:08.121733  194936 cache.go:243] Successfully downloaded all kic artifacts
	I1122 00:28:08.121768  194936 start.go:360] acquireMachinesLock for cert-expiration-624739: {Name:mk3e7a6e0a4875a636ffa6046666b41f1179e198 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:28:08.121867  194936 start.go:364] duration metric: took 82.015µs to acquireMachinesLock for "cert-expiration-624739"
	I1122 00:28:08.121890  194936 start.go:93] Provisioning new machine with config: &{Name:cert-expiration-624739 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-624739 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1122 00:28:08.121970  194936 start.go:125] createHost starting for "" (driver="docker")
	I1122 00:28:03.695928  193714 out.go:252] * Updating the running docker "pause-044220" container ...
	I1122 00:28:03.695972  193714 machine.go:94] provisionDockerMachine start ...
	I1122 00:28:03.696029  193714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-044220
	I1122 00:28:03.715429  193714 main.go:143] libmachine: Using SSH client type: native
	I1122 00:28:03.715789  193714 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32973 <nil> <nil>}
	I1122 00:28:03.715807  193714 main.go:143] libmachine: About to run SSH command:
	hostname
	I1122 00:28:03.838121  193714 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-044220
	
	I1122 00:28:03.838153  193714 ubuntu.go:182] provisioning hostname "pause-044220"
	I1122 00:28:03.838217  193714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-044220
	I1122 00:28:03.855437  193714 main.go:143] libmachine: Using SSH client type: native
	I1122 00:28:03.855689  193714 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32973 <nil> <nil>}
	I1122 00:28:03.855703  193714 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-044220 && echo "pause-044220" | sudo tee /etc/hostname
	I1122 00:28:03.983173  193714 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-044220
	
	I1122 00:28:03.983234  193714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-044220
	I1122 00:28:04.002791  193714 main.go:143] libmachine: Using SSH client type: native
	I1122 00:28:04.003117  193714 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32973 <nil> <nil>}
	I1122 00:28:04.003146  193714 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-044220' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-044220/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-044220' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1122 00:28:04.122959  193714 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1122 00:28:04.122986  193714 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21934-9122/.minikube CaCertPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21934-9122/.minikube}
	I1122 00:28:04.123035  193714 ubuntu.go:190] setting up certificates
	I1122 00:28:04.123074  193714 provision.go:84] configureAuth start
	I1122 00:28:04.123125  193714 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-044220
	I1122 00:28:04.140331  193714 provision.go:143] copyHostCerts
	I1122 00:28:04.140379  193714 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9122/.minikube/key.pem, removing ...
	I1122 00:28:04.140409  193714 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9122/.minikube/key.pem
	I1122 00:28:04.168203  193714 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21934-9122/.minikube/key.pem (1675 bytes)
	I1122 00:28:04.168365  193714 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9122/.minikube/ca.pem, removing ...
	I1122 00:28:04.168378  193714 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9122/.minikube/ca.pem
	I1122 00:28:04.168416  193714 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21934-9122/.minikube/ca.pem (1078 bytes)
	I1122 00:28:04.168473  193714 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9122/.minikube/cert.pem, removing ...
	I1122 00:28:04.168481  193714 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9122/.minikube/cert.pem
	I1122 00:28:04.168509  193714 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21934-9122/.minikube/cert.pem (1123 bytes)
	I1122 00:28:04.168561  193714 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21934-9122/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca-key.pem org=jenkins.pause-044220 san=[127.0.0.1 192.168.76.2 localhost minikube pause-044220]
	I1122 00:28:04.231530  193714 provision.go:177] copyRemoteCerts
	I1122 00:28:04.231595  193714 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1122 00:28:04.231648  193714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-044220
	I1122 00:28:04.249703  193714 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32973 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/pause-044220/id_rsa Username:docker}
	I1122 00:28:04.340957  193714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1122 00:28:04.364148  193714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1122 00:28:04.384358  193714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1122 00:28:04.403456  193714 provision.go:87] duration metric: took 280.369866ms to configureAuth
	I1122 00:28:04.403478  193714 ubuntu.go:206] setting minikube options for container-runtime
	I1122 00:28:04.403705  193714 config.go:182] Loaded profile config "pause-044220": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:28:04.403819  193714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-044220
	I1122 00:28:04.421377  193714 main.go:143] libmachine: Using SSH client type: native
	I1122 00:28:04.421670  193714 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32973 <nil> <nil>}
	I1122 00:28:04.421696  193714 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1122 00:28:07.077834  193714 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1122 00:28:07.077867  193714 machine.go:97] duration metric: took 3.381884648s to provisionDockerMachine
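	The two SSH commands above write a sysconfig drop-in and then restart CRI-O; the ~3.4s step duration is dominated by that restart. A minimal sketch for checking the result on the node (assumption: the crio unit picks the file up, e.g. via an EnvironmentFile= line; that wiring is not shown in this log):

	cat /etc/sysconfig/crio.minikube   # expect: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	systemctl cat crio                 # inspect how the unit consumes the drop-in (assumption)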
	I1122 00:28:07.077882  193714 start.go:293] postStartSetup for "pause-044220" (driver="docker")
	I1122 00:28:07.077895  193714 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1122 00:28:07.077961  193714 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1122 00:28:07.078017  193714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-044220
	I1122 00:28:07.099184  193714 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32973 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/pause-044220/id_rsa Username:docker}
	I1122 00:28:07.210843  193714 ssh_runner.go:195] Run: cat /etc/os-release
	I1122 00:28:07.215254  193714 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1122 00:28:07.215289  193714 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1122 00:28:07.215303  193714 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-9122/.minikube/addons for local assets ...
	I1122 00:28:07.215374  193714 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-9122/.minikube/files for local assets ...
	I1122 00:28:07.215488  193714 filesync.go:149] local asset: /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem -> 145852.pem in /etc/ssl/certs
	I1122 00:28:07.215632  193714 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1122 00:28:07.229533  193714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem --> /etc/ssl/certs/145852.pem (1708 bytes)
	I1122 00:28:07.293301  193714 start.go:296] duration metric: took 215.40153ms for postStartSetup
	I1122 00:28:07.293396  193714 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:28:07.293438  193714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-044220
	I1122 00:28:07.318147  193714 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32973 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/pause-044220/id_rsa Username:docker}
	I1122 00:28:07.408462  193714 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 00:28:07.413628  193714 fix.go:56] duration metric: took 3.741934847s for fixHost
	I1122 00:28:07.413655  193714 start.go:83] releasing machines lock for "pause-044220", held for 3.741989246s
	I1122 00:28:07.413729  193714 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-044220
	I1122 00:28:07.432704  193714 ssh_runner.go:195] Run: cat /version.json
	I1122 00:28:07.432763  193714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-044220
	I1122 00:28:07.432775  193714 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1122 00:28:07.432845  193714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-044220
	I1122 00:28:07.450678  193714 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32973 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/pause-044220/id_rsa Username:docker}
	I1122 00:28:07.451604  193714 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32973 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/pause-044220/id_rsa Username:docker}
	I1122 00:28:07.604711  193714 ssh_runner.go:195] Run: systemctl --version
	I1122 00:28:07.613595  193714 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1122 00:28:07.655707  193714 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1122 00:28:07.660705  193714 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1122 00:28:07.660774  193714 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1122 00:28:07.669865  193714 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1122 00:28:07.669891  193714 start.go:496] detecting cgroup driver to use...
	I1122 00:28:07.669920  193714 detect.go:190] detected "systemd" cgroup driver on host os
	I1122 00:28:07.669963  193714 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1122 00:28:07.684801  193714 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1122 00:28:07.701218  193714 docker.go:218] disabling cri-docker service (if available) ...
	I1122 00:28:07.701369  193714 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1122 00:28:07.721117  193714 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1122 00:28:07.739696  193714 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1122 00:28:07.873936  193714 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1122 00:28:08.030151  193714 docker.go:234] disabling docker service ...
	I1122 00:28:08.030225  193714 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1122 00:28:08.051961  193714 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1122 00:28:08.070245  193714 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1122 00:28:08.220630  193714 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1122 00:28:08.376086  193714 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1122 00:28:08.388960  193714 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1122 00:28:08.410567  193714 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1122 00:28:08.410649  193714 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:28:07.296467  193202 cli_runner.go:217] Completed: docker run --rm --name running-upgrade-670577-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=running-upgrade-670577 --entrypoint /usr/bin/test -v running-upgrade-670577:/var gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -d /var/lib: (9.507077812s)
	I1122 00:28:07.296496  193202 oci.go:107] Successfully prepared a docker volume running-upgrade-670577
	I1122 00:28:07.296532  193202 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1122 00:28:07.296563  193202 kic.go:194] Starting extracting preloaded images to volume ...
	I1122 00:28:07.296624  193202 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21934-9122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v running-upgrade-670577:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1122 00:28:07.586366  185312 cli_runner.go:217] Completed: docker run --rm --name stopped-upgrade-220412-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=stopped-upgrade-220412 --entrypoint /usr/bin/test -v stopped-upgrade-220412:/var gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -d /var/lib: (17.681611163s)
	I1122 00:28:07.586393  185312 oci.go:107] Successfully prepared a docker volume stopped-upgrade-220412
	I1122 00:28:07.586414  185312 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1122 00:28:07.586447  185312 kic.go:194] Starting extracting preloaded images to volume ...
	I1122 00:28:07.586531  185312 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21934-9122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v stopped-upgrade-220412:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1122 00:28:08.127776  194936 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1122 00:28:08.128075  194936 start.go:159] libmachine.API.Create for "cert-expiration-624739" (driver="docker")
	I1122 00:28:08.128110  194936 client.go:173] LocalClient.Create starting
	I1122 00:28:08.128189  194936 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem
	I1122 00:28:08.128224  194936 main.go:143] libmachine: Decoding PEM data...
	I1122 00:28:08.128249  194936 main.go:143] libmachine: Parsing certificate...
	I1122 00:28:08.128311  194936 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem
	I1122 00:28:08.128336  194936 main.go:143] libmachine: Decoding PEM data...
	I1122 00:28:08.128354  194936 main.go:143] libmachine: Parsing certificate...
	I1122 00:28:08.128826  194936 cli_runner.go:164] Run: docker network inspect cert-expiration-624739 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1122 00:28:08.148910  194936 cli_runner.go:211] docker network inspect cert-expiration-624739 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1122 00:28:08.148974  194936 network_create.go:284] running [docker network inspect cert-expiration-624739] to gather additional debugging logs...
	I1122 00:28:08.148987  194936 cli_runner.go:164] Run: docker network inspect cert-expiration-624739
	W1122 00:28:08.168253  194936 cli_runner.go:211] docker network inspect cert-expiration-624739 returned with exit code 1
	I1122 00:28:08.168291  194936 network_create.go:287] error running [docker network inspect cert-expiration-624739]: docker network inspect cert-expiration-624739: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network cert-expiration-624739 not found
	I1122 00:28:08.168304  194936 network_create.go:289] output of [docker network inspect cert-expiration-624739]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network cert-expiration-624739 not found
	
	** /stderr **
	I1122 00:28:08.168454  194936 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:28:08.192537  194936 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-680fbf0b84de IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8e:e6:2f:e5:eb:9f} reservation:<nil>}
	I1122 00:28:08.193399  194936 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-fb90361b5ee3 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:6a:cd:d3:0b:da:39} reservation:<nil>}
	I1122 00:28:08.194191  194936 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8fa9d9e8b5ac IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:be:02:4b:3f:d0:70} reservation:<nil>}
	I1122 00:28:08.195106  194936 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-cf1125a89f94 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:42:f2:07:fa:fd:c9} reservation:<nil>}
	I1122 00:28:08.196066  194936 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-77229b827ce8 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:26:9b:8d:69:33:c2} reservation:<nil>}
	I1122 00:28:08.197274  194936 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ed5c80}
	I1122 00:28:08.197302  194936 network_create.go:124] attempt to create docker network cert-expiration-624739 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1122 00:28:08.197376  194936 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cert-expiration-624739 cert-expiration-624739
	I1122 00:28:08.255249  194936 network_create.go:108] docker network cert-expiration-624739 192.168.94.0/24 created
	I1122 00:28:08.255272  194936 kic.go:121] calculated static IP "192.168.94.2" for the "cert-expiration-624739" container
	I1122 00:28:08.255350  194936 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1122 00:28:08.280175  194936 cli_runner.go:164] Run: docker volume create cert-expiration-624739 --label name.minikube.sigs.k8s.io=cert-expiration-624739 --label created_by.minikube.sigs.k8s.io=true
	I1122 00:28:08.299377  194936 oci.go:103] Successfully created a docker volume cert-expiration-624739
	I1122 00:28:08.299467  194936 cli_runner.go:164] Run: docker run --rm --name cert-expiration-624739-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-624739 --entrypoint /usr/bin/test -v cert-expiration-624739:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -d /var/lib
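	The subnet scan above skips 192.168.49/58/67/76/85.0/24 because existing bridges occupy them, then creates the network on the first free /24. A minimal sketch of the equivalent manual steps (minikube's labels and its --ip-masq/--icc options omitted for brevity):

	docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 \
	  -o com.docker.network.driver.mtu=1500 cert-expiration-624739
	docker network inspect -f '{{range .IPAM.Config}}{{.Subnet}}{{end}}' cert-expiration-624739   # 192.168.94.0/24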
	I1122 00:28:08.474991  193714 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1122 00:28:08.475108  193714 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:28:08.533608  193714 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:28:08.597134  193714 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:28:08.660414  193714 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1122 00:28:08.669242  193714 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:28:08.719810  193714 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:28:08.729513  193714 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:28:08.774231  193714 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1122 00:28:08.782896  193714 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1122 00:28:08.790886  193714 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:28:08.908454  193714 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1122 00:28:13.804939  193714 ssh_runner.go:235] Completed: sudo systemctl restart crio: (4.896446263s)
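	The sed edits above drive three settings into /etc/crio/crio.conf.d/02-crio.conf before the restart. A minimal verification sketch, reusing the `crio config` call the harness itself makes further down:

	sudo crio config 2>/dev/null | grep -E 'cgroup_manager|conmon_cgroup|pause_image'
	# expected: cgroup_manager = "systemd", conmon_cgroup = "pod", pause_image = "registry.k8s.io/pause:3.10.1"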
	I1122 00:28:13.804970  193714 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1122 00:28:13.805020  193714 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1122 00:28:13.809713  193714 start.go:564] Will wait 60s for crictl version
	I1122 00:28:13.809770  193714 ssh_runner.go:195] Run: which crictl
	I1122 00:28:13.813742  193714 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1122 00:28:13.848581  193714 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1122 00:28:13.848656  193714 ssh_runner.go:195] Run: crio --version
	I1122 00:28:13.886425  193714 ssh_runner.go:195] Run: crio --version
	I1122 00:28:13.923660  193714 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1122 00:28:13.717178  193202 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21934-9122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v running-upgrade-670577:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir: (6.420506165s)
	I1122 00:28:13.717209  193202 kic.go:203] duration metric: took 6.420644 seconds to extract preloaded images to volume
	W1122 00:28:13.717320  193202 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1122 00:28:13.717370  193202 oci.go:243] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1122 00:28:13.717419  193202 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1122 00:28:13.797265  193202 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname running-upgrade-670577 --name running-upgrade-670577 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=running-upgrade-670577 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=running-upgrade-670577 --network running-upgrade-670577 --ip 192.168.103.2 --volume running-upgrade-670577:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0
	I1122 00:28:14.176913  193202 cli_runner.go:164] Run: docker container inspect running-upgrade-670577 --format={{.State.Running}}
	I1122 00:28:14.201441  193202 cli_runner.go:164] Run: docker container inspect running-upgrade-670577 --format={{.State.Status}}
	I1122 00:28:14.222423  193202 cli_runner.go:164] Run: docker exec running-upgrade-670577 stat /var/lib/dpkg/alternatives/iptables
	I1122 00:28:14.271371  193202 oci.go:144] the created container "running-upgrade-670577" has a running status.
	I1122 00:28:14.271413  193202 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21934-9122/.minikube/machines/running-upgrade-670577/id_rsa...
	I1122 00:28:14.700600  193202 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21934-9122/.minikube/machines/running-upgrade-670577/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1122 00:28:14.841574  193202 cli_runner.go:164] Run: docker container inspect running-upgrade-670577 --format={{.State.Status}}
	I1122 00:28:14.866600  193202 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1122 00:28:14.866616  193202 kic_runner.go:114] Args: [docker exec --privileged running-upgrade-670577 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1122 00:28:14.943751  193202 cli_runner.go:164] Run: docker container inspect running-upgrade-670577 --format={{.State.Status}}
	I1122 00:28:14.972997  193202 machine.go:88] provisioning docker machine ...
	I1122 00:28:14.973233  193202 ubuntu.go:169] provisioning hostname "running-upgrade-670577"
	I1122 00:28:14.973330  193202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-670577
	I1122 00:28:14.999555  193202 main.go:141] libmachine: Using SSH client type: native
	I1122 00:28:15.000405  193202 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 127.0.0.1 32988 <nil> <nil>}
	I1122 00:28:15.000423  193202 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-670577 && echo "running-upgrade-670577" | sudo tee /etc/hostname
	I1122 00:28:15.153602  193202 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-670577
	
	I1122 00:28:15.153665  193202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-670577
	I1122 00:28:15.176308  193202 main.go:141] libmachine: Using SSH client type: native
	I1122 00:28:15.176805  193202 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 127.0.0.1 32988 <nil> <nil>}
	I1122 00:28:15.176828  193202 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-670577' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-670577/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-670577' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1122 00:28:13.925089  193714 cli_runner.go:164] Run: docker network inspect pause-044220 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:28:13.945208  193714 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1122 00:28:13.950992  193714 kubeadm.go:884] updating cluster {Name:pause-044220 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-044220 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1122 00:28:13.951144  193714 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:28:13.951185  193714 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:28:13.989577  193714 crio.go:514] all images are preloaded for cri-o runtime.
	I1122 00:28:13.989603  193714 crio.go:433] Images already preloaded, skipping extraction
	I1122 00:28:13.989672  193714 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:28:14.025650  193714 crio.go:514] all images are preloaded for cri-o runtime.
	I1122 00:28:14.025677  193714 cache_images.go:86] Images are preloaded, skipping loading
	I1122 00:28:14.025685  193714 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1122 00:28:14.025783  193714 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-044220 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-044220 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
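
	The [Unit]/[Service]/[Install] fragment logged above is the kubelet systemd drop-in that gets written a few steps later as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 362-byte scp below). A minimal Go sketch of rendering such a drop-in with text/template; the struct and field names here are illustrative, not minikube's actual types:

package main

import (
	"os"
	"text/template"
)

// kubeletOpts is a hypothetical stand-in for the values substituted into the
// drop-in; minikube's real generator uses its own config types.
type kubeletOpts struct {
	BinDir, NodeName, NodeIP string
}

const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.BinDir}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --hostname-override={{.NodeName}} --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	if err := t.Execute(os.Stdout, kubeletOpts{
		BinDir:   "/var/lib/minikube/binaries/v1.34.1",
		NodeName: "pause-044220",
		NodeIP:   "192.168.76.2",
	}); err != nil {
		panic(err)
	}
}
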
	I1122 00:28:14.025852  193714 ssh_runner.go:195] Run: crio config
	I1122 00:28:14.079865  193714 cni.go:84] Creating CNI manager for ""
	I1122 00:28:14.079909  193714 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1122 00:28:14.079937  193714 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1122 00:28:14.079977  193714 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-044220 NodeName:pause-044220 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1122 00:28:14.080224  193714 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-044220"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1122 00:28:14.080307  193714 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1122 00:28:14.089311  193714 binaries.go:51] Found k8s binaries, skipping transfer
	I1122 00:28:14.089382  193714 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1122 00:28:14.096844  193714 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1122 00:28:14.110234  193714 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1122 00:28:14.129462  193714 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
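
	The kubeadm config dumped above is four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) concatenated into the single kubeadm.yaml.new file just written. A stdlib-only Go sketch that splits the file on document separators and reports each document's kind; a real consumer would use a YAML parser:

package main

import (
	"fmt"
	"os"
	"strings"
)

// Splits the multi-document kubeadm config written above and prints the
// kind of each YAML document it contains.
func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	for _, doc := range strings.Split(string(data), "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(strings.TrimSpace(line), "kind:") {
				fmt.Println(strings.TrimSpace(line))
				break
			}
		}
	}
}
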
	I1122 00:28:14.142629  193714 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1122 00:28:14.146394  193714 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:28:14.282520  193714 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:28:14.302004  193714 certs.go:69] Setting up /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/pause-044220 for IP: 192.168.76.2
	I1122 00:28:14.302034  193714 certs.go:195] generating shared ca certs ...
	I1122 00:28:14.302076  193714 certs.go:227] acquiring lock for ca certs: {Name:mk04e97b22fe69e444baefaaf0d53a0afe979470 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:28:14.302265  193714 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21934-9122/.minikube/ca.key
	I1122 00:28:14.302324  193714 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21934-9122/.minikube/proxy-client-ca.key
	I1122 00:28:14.302341  193714 certs.go:257] generating profile certs ...
	I1122 00:28:14.302457  193714 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/pause-044220/client.key
	I1122 00:28:14.302534  193714 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/pause-044220/apiserver.key.33726e52
	I1122 00:28:14.302585  193714 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/pause-044220/proxy-client.key
	I1122 00:28:14.302814  193714 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/14585.pem (1338 bytes)
	W1122 00:28:14.302859  193714 certs.go:480] ignoring /home/jenkins/minikube-integration/21934-9122/.minikube/certs/14585_empty.pem, impossibly tiny 0 bytes
	I1122 00:28:14.302888  193714 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca-key.pem (1679 bytes)
	I1122 00:28:14.302924  193714 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem (1078 bytes)
	I1122 00:28:14.302977  193714 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem (1123 bytes)
	I1122 00:28:14.303009  193714 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/key.pem (1675 bytes)
	I1122 00:28:14.303182  193714 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem (1708 bytes)
	I1122 00:28:14.304487  193714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1122 00:28:14.332251  193714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1122 00:28:14.359280  193714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1122 00:28:14.382301  193714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1122 00:28:14.404480  193714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/pause-044220/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1122 00:28:14.428761  193714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/pause-044220/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1122 00:28:14.455692  193714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/pause-044220/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1122 00:28:14.481039  193714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/pause-044220/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1122 00:28:14.507561  193714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem --> /usr/share/ca-certificates/145852.pem (1708 bytes)
	I1122 00:28:14.540974  193714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1122 00:28:14.581587  193714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/certs/14585.pem --> /usr/share/ca-certificates/14585.pem (1338 bytes)
	I1122 00:28:14.644875  193714 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1122 00:28:14.701998  193714 ssh_runner.go:195] Run: openssl version
	I1122 00:28:14.709625  193714 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1122 00:28:14.719935  193714 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:28:14.724360  193714 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 23:46 /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:28:14.724411  193714 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:28:14.782794  193714 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1122 00:28:14.792934  193714 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14585.pem && ln -fs /usr/share/ca-certificates/14585.pem /etc/ssl/certs/14585.pem"
	I1122 00:28:14.807941  193714 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14585.pem
	I1122 00:28:14.812619  193714 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 23:52 /usr/share/ca-certificates/14585.pem
	I1122 00:28:14.812670  193714 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14585.pem
	I1122 00:28:14.848165  193714 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14585.pem /etc/ssl/certs/51391683.0"
	I1122 00:28:14.861634  193714 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145852.pem && ln -fs /usr/share/ca-certificates/145852.pem /etc/ssl/certs/145852.pem"
	I1122 00:28:14.877357  193714 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145852.pem
	I1122 00:28:14.885564  193714 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 23:52 /usr/share/ca-certificates/145852.pem
	I1122 00:28:14.885651  193714 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145852.pem
	I1122 00:28:14.961694  193714 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145852.pem /etc/ssl/certs/3ec20f2e.0"
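
	The openssl/ln sequence above is the classic c_rehash pattern: compute each CA certificate's subject hash, then symlink it into /etc/ssl/certs under <hash>.0, which is how b5213941, 51391683 and 3ec20f2e become the link names in this log. A Go sketch of one iteration, shelling out to openssl exactly as the remote commands do:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCert mirrors one round of the shell sequence above: ask openssl for
// the certificate's subject hash, then link /etc/ssl/certs/<hash>.0 at it.
func linkCert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // ln -fs semantics: replace any existing link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
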
	I1122 00:28:14.974256  193714 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1122 00:28:14.980013  193714 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1122 00:28:15.039772  193714 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1122 00:28:15.084548  193714 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1122 00:28:15.129169  193714 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1122 00:28:15.174191  193714 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1122 00:28:15.227674  193714 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
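
	Each `openssl x509 -checkend 86400` run above asks one question: will this certificate still be valid 24 hours from now? The same check in Go with crypto/x509, using one of the certificate paths from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// Equivalent of `openssl x509 -checkend 86400`: exit non-zero if the
// certificate expires within the next 24 hours.
func main() {
	raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least 24h")
}
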
	I1122 00:28:15.281616  193714 kubeadm.go:401] StartCluster: {Name:pause-044220 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-044220 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:28:15.281759  193714 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1122 00:28:15.281827  193714 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1122 00:28:15.311105  193714 cri.go:89] found id: "e462ee7150306df05ce83db1f3e0df58183280d43d04f1cb6d52feef9a3b7b3d"
	I1122 00:28:15.311127  193714 cri.go:89] found id: "f4bcdbc163f622f7ad75cecdf8d434f7eaf5d5abf5ce74eb74a0f51eaacea0e2"
	I1122 00:28:15.311131  193714 cri.go:89] found id: "0e4ad5609f7872bdcc433e845fd6360589ac30894e126e3f56e8d3cd82296232"
	I1122 00:28:15.311135  193714 cri.go:89] found id: "3e14061fd4fccbedc7dfab2388bb9c60556e3392e101adffd2e959a71915c5ba"
	I1122 00:28:15.311138  193714 cri.go:89] found id: "ecabd3963637033e9f4270fbbeef757d2c6b777cf5058c6dcad3e6914ca9e45c"
	I1122 00:28:15.311140  193714 cri.go:89] found id: "79e36c09e9fdbb8b8ebf65a808135710cb7702a0ba5485d102074e2baaf898e5"
	I1122 00:28:15.311143  193714 cri.go:89] found id: "b4370898f997f4499a567b711a03da7b4fec4f862abdda3f9d2f8bbdb7555955"
	I1122 00:28:15.311146  193714 cri.go:89] found id: ""
	I1122 00:28:15.311183  193714 ssh_runner.go:195] Run: sudo runc list -f json
	W1122 00:28:15.321740  193714 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-22T00:28:15Z" level=error msg="open /run/runc: no such file or directory"
	I1122 00:28:15.321791  193714 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1122 00:28:15.329378  193714 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1122 00:28:15.329394  193714 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1122 00:28:15.329431  193714 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1122 00:28:15.336165  193714 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1122 00:28:15.336650  193714 kubeconfig.go:125] found "pause-044220" server: "https://192.168.76.2:8443"
	I1122 00:28:15.337132  193714 kapi.go:59] client config for pause-044220: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21934-9122/.minikube/profiles/pause-044220/client.crt", KeyFile:"/home/jenkins/minikube-integration/21934-9122/.minikube/profiles/pause-044220/client.key", CAFile:"/home/jenkins/minikube-integration/21934-9122/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2814ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1122 00:28:15.337521  193714 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1122 00:28:15.337533  193714 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1122 00:28:15.337538  193714 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1122 00:28:15.337542  193714 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1122 00:28:15.337549  193714 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1122 00:28:15.338040  193714 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1122 00:28:15.346890  193714 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1122 00:28:15.346922  193714 kubeadm.go:602] duration metric: took 17.52235ms to restartPrimaryControlPlane
	I1122 00:28:15.346932  193714 kubeadm.go:403] duration metric: took 65.330154ms to StartCluster
	I1122 00:28:15.346951  193714 settings.go:142] acquiring lock: {Name:mk281bec5fc7c41e6f3fe8d6a1502b13a2db8fe5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:28:15.347027  193714 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21934-9122/kubeconfig
	I1122 00:28:15.347818  193714 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/kubeconfig: {Name:mk85f0b9d89ff824568ecbebf0d1111042a6bb93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:28:15.356805  193714 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1122 00:28:15.356921  193714 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1122 00:28:15.357090  193714 config.go:182] Loaded profile config "pause-044220": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:28:15.361380  193714 out.go:179] * Enabled addons: 
	I1122 00:28:15.361423  193714 out.go:179] * Verifying Kubernetes components...
	I1122 00:28:13.724111  185312 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21934-9122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v stopped-upgrade-220412:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir: (6.137534951s)
	I1122 00:28:13.724147  185312 kic.go:203] duration metric: took 6.137698 seconds to extract preloaded images to volume
	W1122 00:28:13.724257  185312 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1122 00:28:13.724309  185312 oci.go:243] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1122 00:28:13.724355  185312 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1122 00:28:13.797286  185312 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname stopped-upgrade-220412 --name stopped-upgrade-220412 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=stopped-upgrade-220412 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=stopped-upgrade-220412 --network stopped-upgrade-220412 --ip 192.168.85.2 --volume stopped-upgrade-220412:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0
	I1122 00:28:14.297242  185312 cli_runner.go:164] Run: docker container inspect stopped-upgrade-220412 --format={{.State.Running}}
	I1122 00:28:14.320343  185312 cli_runner.go:164] Run: docker container inspect stopped-upgrade-220412 --format={{.State.Status}}
	I1122 00:28:14.343264  185312 cli_runner.go:164] Run: docker exec stopped-upgrade-220412 stat /var/lib/dpkg/alternatives/iptables
	I1122 00:28:14.404183  185312 oci.go:144] the created container "stopped-upgrade-220412" has a running status.
	I1122 00:28:14.404209  185312 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21934-9122/.minikube/machines/stopped-upgrade-220412/id_rsa...
	I1122 00:28:14.821937  185312 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21934-9122/.minikube/machines/stopped-upgrade-220412/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1122 00:28:14.891727  185312 cli_runner.go:164] Run: docker container inspect stopped-upgrade-220412 --format={{.State.Status}}
	I1122 00:28:14.927265  185312 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1122 00:28:14.927286  185312 kic_runner.go:114] Args: [docker exec --privileged stopped-upgrade-220412 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1122 00:28:14.995573  185312 cli_runner.go:164] Run: docker container inspect stopped-upgrade-220412 --format={{.State.Status}}
	I1122 00:28:15.024497  185312 machine.go:88] provisioning docker machine ...
	I1122 00:28:15.024534  185312 ubuntu.go:169] provisioning hostname "stopped-upgrade-220412"
	I1122 00:28:15.024615  185312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-220412
	I1122 00:28:15.049103  185312 main.go:141] libmachine: Using SSH client type: native
	I1122 00:28:15.050569  185312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 127.0.0.1 32993 <nil> <nil>}
	I1122 00:28:15.050588  185312 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-220412 && echo "stopped-upgrade-220412" | sudo tee /etc/hostname
	I1122 00:28:15.192552  185312 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-220412
	
	I1122 00:28:15.192636  185312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-220412
	I1122 00:28:15.215904  185312 main.go:141] libmachine: Using SSH client type: native
	I1122 00:28:15.216480  185312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 127.0.0.1 32993 <nil> <nil>}
	I1122 00:28:15.216504  185312 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-220412' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-220412/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-220412' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1122 00:28:15.339919  185312 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1122 00:28:15.339942  185312 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/21934-9122/.minikube CaCertPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21934-9122/.minikube}
	I1122 00:28:15.339987  185312 ubuntu.go:177] setting up certificates
	I1122 00:28:15.340000  185312 provision.go:83] configureAuth start
	I1122 00:28:15.340048  185312 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-220412
	I1122 00:28:15.359548  185312 provision.go:138] copyHostCerts
	I1122 00:28:15.359602  185312 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9122/.minikube/key.pem, removing ...
	I1122 00:28:15.359613  185312 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9122/.minikube/key.pem
	I1122 00:28:15.360988  185312 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21934-9122/.minikube/key.pem (1675 bytes)
	I1122 00:28:15.361135  185312 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9122/.minikube/ca.pem, removing ...
	I1122 00:28:15.361143  185312 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9122/.minikube/ca.pem
	I1122 00:28:15.361184  185312 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21934-9122/.minikube/ca.pem (1078 bytes)
	I1122 00:28:15.361265  185312 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9122/.minikube/cert.pem, removing ...
	I1122 00:28:15.361271  185312 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9122/.minikube/cert.pem
	I1122 00:28:15.361308  185312 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21934-9122/.minikube/cert.pem (1123 bytes)
	I1122 00:28:15.361368  185312 provision.go:112] generating server cert: /home/jenkins/minikube-integration/21934-9122/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-220412 san=[192.168.85.2 127.0.0.1 localhost 127.0.0.1 minikube stopped-upgrade-220412]
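
	provision.go generates one server certificate whose SAN set is exactly the list printed above (the machine IP plus the localhost/minikube/profile-name names). A compact Go sketch of building such a certificate with crypto/x509; it self-signs for brevity, whereas minikube signs with the ca.pem/ca-key.pem pair referenced in the log:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

// Builds a server certificate carrying the SANs from the log line above.
func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.stopped-upgrade-220412"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "stopped-upgrade-220412"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.85.2"), net.ParseIP("127.0.0.1")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
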
	I1122 00:28:15.584255  185312 provision.go:172] copyRemoteCerts
	I1122 00:28:15.584301  185312 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1122 00:28:15.584336  185312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-220412
	I1122 00:28:15.603819  185312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32993 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/stopped-upgrade-220412/id_rsa Username:docker}
	I1122 00:28:15.692739  185312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1122 00:28:15.718556  185312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1122 00:28:13.978255  194936 cli_runner.go:217] Completed: docker run --rm --name cert-expiration-624739-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-624739 --entrypoint /usr/bin/test -v cert-expiration-624739:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -d /var/lib: (5.678745918s)
	I1122 00:28:13.978277  194936 oci.go:107] Successfully prepared a docker volume cert-expiration-624739
	I1122 00:28:13.978328  194936 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:28:13.978337  194936 kic.go:194] Starting extracting preloaded images to volume ...
	I1122 00:28:13.978406  194936 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21934-9122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v cert-expiration-624739:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -I lz4 -xf /preloaded.tar -C /extractDir
	I1122 00:28:15.363354  193714 addons.go:530] duration metric: took 6.443147ms for enable addons: enabled=[]
	I1122 00:28:15.365108  193714 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:28:15.489131  193714 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:28:15.504310  193714 node_ready.go:35] waiting up to 6m0s for node "pause-044220" to be "Ready" ...
	W1122 00:28:17.507809  193714 node_ready.go:57] node "pause-044220" has "Ready":"False" status (will retry)
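
	The node_ready wait polls the node object and retries while the Ready condition reports "False", as the warning above shows. A client-go sketch of the same loop, using the kubeconfig path and node name from this run (assumes k8s.io/client-go is available):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// Polls the node's Ready condition like the node_ready wait above. The real
// wait gives up after the 6m0s budget from start.go:236; this loops forever.
func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21934-9122/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "pause-044220", metav1.GetOptions{})
		if err == nil {
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
					fmt.Println("node pause-044220 is Ready")
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
}
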
	I1122 00:28:15.744184  185312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1122 00:28:15.769206  185312 provision.go:86] duration metric: configureAuth took 429.192966ms
	I1122 00:28:15.769233  185312 ubuntu.go:193] setting minikube options for container-runtime
	I1122 00:28:15.769392  185312 config.go:182] Loaded profile config "stopped-upgrade-220412": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1122 00:28:15.769497  185312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-220412
	I1122 00:28:15.785955  185312 main.go:141] libmachine: Using SSH client type: native
	I1122 00:28:15.786462  185312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 127.0.0.1 32993 <nil> <nil>}
	I1122 00:28:15.786503  185312 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1122 00:28:16.013765  185312 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1122 00:28:16.013783  185312 machine.go:91] provisioned docker machine in 989.271734ms
	I1122 00:28:16.013798  185312 client.go:171] LocalClient.Create took 26.261626746s
	I1122 00:28:16.013815  185312 start.go:167] duration metric: libmachine.API.Create for "stopped-upgrade-220412" took 26.261676679s
	I1122 00:28:16.013822  185312 start.go:300] post-start starting for "stopped-upgrade-220412" (driver="docker")
	I1122 00:28:16.013834  185312 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1122 00:28:16.013885  185312 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1122 00:28:16.013923  185312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-220412
	I1122 00:28:16.030620  185312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32993 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/stopped-upgrade-220412/id_rsa Username:docker}
	I1122 00:28:16.120728  185312 ssh_runner.go:195] Run: cat /etc/os-release
	I1122 00:28:16.124364  185312 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1122 00:28:16.124397  185312 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1122 00:28:16.124410  185312 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1122 00:28:16.124418  185312 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1122 00:28:16.124430  185312 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-9122/.minikube/addons for local assets ...
	I1122 00:28:16.124492  185312 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-9122/.minikube/files for local assets ...
	I1122 00:28:16.124558  185312 filesync.go:149] local asset: /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem -> 145852.pem in /etc/ssl/certs
	I1122 00:28:16.124640  185312 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1122 00:28:16.134632  185312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem --> /etc/ssl/certs/145852.pem (1708 bytes)
	I1122 00:28:16.160877  185312 start.go:303] post-start completed in 147.041923ms
	I1122 00:28:16.161240  185312 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-220412
	I1122 00:28:16.179977  185312 profile.go:148] Saving config to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/stopped-upgrade-220412/config.json ...
	I1122 00:28:16.180305  185312 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:28:16.180349  185312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-220412
	I1122 00:28:16.196628  185312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32993 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/stopped-upgrade-220412/id_rsa Username:docker}
	I1122 00:28:16.280065  185312 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 00:28:16.284350  185312 start.go:128] duration metric: createHost completed in 26.534219712s
	I1122 00:28:16.284369  185312 start.go:83] releasing machines lock for "stopped-upgrade-220412", held for 26.534329344s
	I1122 00:28:16.284435  185312 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-220412
	I1122 00:28:16.303395  185312 ssh_runner.go:195] Run: cat /version.json
	I1122 00:28:16.303437  185312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-220412
	I1122 00:28:16.303481  185312 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1122 00:28:16.303525  185312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-220412
	I1122 00:28:16.322685  185312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32993 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/stopped-upgrade-220412/id_rsa Username:docker}
	I1122 00:28:16.323117  185312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32993 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/stopped-upgrade-220412/id_rsa Username:docker}
	I1122 00:28:16.501646  185312 ssh_runner.go:195] Run: systemctl --version
	I1122 00:28:16.506651  185312 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1122 00:28:16.645781  185312 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1122 00:28:16.650369  185312 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1122 00:28:16.677683  185312 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1122 00:28:16.677755  185312 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1122 00:28:16.717037  185312 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
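
	Disabling the bundled bridge/podman CNI configs is a rename, not a delete: each matching file under /etc/cni/net.d gains a .mk_disabled suffix so CRI-O stops loading it and the CNI minikube configures later takes precedence. A stdlib Go sketch of the same rename pass:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// Renames bridge/podman CNI configs to *.mk_disabled, mirroring the
// `find ... -exec mv {} {}.mk_disabled` pass in the log above.
func main() {
	const dir = "/etc/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		panic(err)
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err == nil {
				fmt.Println("disabled", src)
			}
		}
	}
}
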
	I1122 00:28:16.717153  185312 start.go:472] detecting cgroup driver to use...
	I1122 00:28:16.717190  185312 detect.go:199] detected "systemd" cgroup driver on host os
	I1122 00:28:16.717246  185312 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1122 00:28:16.733092  185312 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1122 00:28:16.745174  185312 docker.go:203] disabling cri-docker service (if available) ...
	I1122 00:28:16.745230  185312 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1122 00:28:16.761994  185312 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1122 00:28:16.776022  185312 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1122 00:28:16.850802  185312 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1122 00:28:16.938709  185312 docker.go:219] disabling docker service ...
	I1122 00:28:16.938780  185312 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1122 00:28:16.956073  185312 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1122 00:28:16.967453  185312 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1122 00:28:17.052851  185312 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1122 00:28:17.239426  185312 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1122 00:28:17.250069  185312 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1122 00:28:17.265874  185312 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1122 00:28:17.265921  185312 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:28:17.481371  185312 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1122 00:28:17.481442  185312 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:28:17.609516  185312 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:28:17.738996  185312 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
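
	Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following overrides (section placement assumed from the stock crio.conf layout; only the edited keys are shown):

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"
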
	I1122 00:28:17.868563  185312 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1122 00:28:17.877749  185312 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1122 00:28:17.886330  185312 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1122 00:28:17.894986  185312 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:28:17.956091  185312 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1122 00:28:18.717486  185312 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1122 00:28:18.717550  185312 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1122 00:28:18.721890  185312 start.go:540] Will wait 60s for crictl version
	I1122 00:28:18.721938  185312 ssh_runner.go:195] Run: which crictl
	I1122 00:28:18.725402  185312 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1122 00:28:18.768134  185312 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1122 00:28:18.768210  185312 ssh_runner.go:195] Run: crio --version
	I1122 00:28:18.805436  185312 ssh_runner.go:195] Run: crio --version
	I1122 00:28:18.843978  185312 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.6 ...
	I1122 00:28:15.304250  193202 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1122 00:28:15.338106  193202 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/21934-9122/.minikube CaCertPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21934-9122/.minikube}
	I1122 00:28:15.338138  193202 ubuntu.go:177] setting up certificates
	I1122 00:28:15.338148  193202 provision.go:83] configureAuth start
	I1122 00:28:15.338303  193202 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-670577
	I1122 00:28:15.357394  193202 provision.go:138] copyHostCerts
	I1122 00:28:15.357462  193202 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9122/.minikube/key.pem, removing ...
	I1122 00:28:15.357472  193202 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9122/.minikube/key.pem
	I1122 00:28:15.357542  193202 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21934-9122/.minikube/key.pem (1675 bytes)
	I1122 00:28:15.357636  193202 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9122/.minikube/ca.pem, removing ...
	I1122 00:28:15.357641  193202 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9122/.minikube/ca.pem
	I1122 00:28:15.357676  193202 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21934-9122/.minikube/ca.pem (1078 bytes)
	I1122 00:28:15.357750  193202 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9122/.minikube/cert.pem, removing ...
	I1122 00:28:15.357756  193202 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9122/.minikube/cert.pem
	I1122 00:28:15.357791  193202 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21934-9122/.minikube/cert.pem (1123 bytes)
	I1122 00:28:15.357854  193202 provision.go:112] generating server cert: /home/jenkins/minikube-integration/21934-9122/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-670577 san=[192.168.103.2 127.0.0.1 localhost 127.0.0.1 minikube running-upgrade-670577]
	I1122 00:28:15.481830  193202 provision.go:172] copyRemoteCerts
	I1122 00:28:15.481885  193202 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1122 00:28:15.481930  193202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-670577
	I1122 00:28:15.501019  193202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32988 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/running-upgrade-670577/id_rsa Username:docker}
	I1122 00:28:15.587767  193202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1122 00:28:15.617857  193202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1122 00:28:15.641541  193202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1122 00:28:15.665449  193202 provision.go:86] duration metric: configureAuth took 327.289941ms
	I1122 00:28:15.665471  193202 ubuntu.go:193] setting minikube options for container-runtime
	I1122 00:28:15.665639  193202 config.go:182] Loaded profile config "running-upgrade-670577": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1122 00:28:15.665753  193202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-670577
	I1122 00:28:15.685126  193202 main.go:141] libmachine: Using SSH client type: native
	I1122 00:28:15.685677  193202 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 127.0.0.1 32988 <nil> <nil>}
	I1122 00:28:15.685701  193202 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1122 00:28:15.921742  193202 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1122 00:28:15.921762  193202 machine.go:91] provisioned docker machine in 948.751439ms
	I1122 00:28:15.921770  193202 client.go:171] LocalClient.Create took 18.295485742s
	I1122 00:28:15.921788  193202 start.go:167] duration metric: libmachine.API.Create for "running-upgrade-670577" took 18.29553475s
	I1122 00:28:15.921797  193202 start.go:300] post-start starting for "running-upgrade-670577" (driver="docker")
	I1122 00:28:15.921830  193202 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1122 00:28:15.921897  193202 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1122 00:28:15.921931  193202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-670577
	I1122 00:28:15.941172  193202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32988 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/running-upgrade-670577/id_rsa Username:docker}
	I1122 00:28:16.034036  193202 ssh_runner.go:195] Run: cat /etc/os-release
	I1122 00:28:16.037421  193202 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1122 00:28:16.037455  193202 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1122 00:28:16.037469  193202 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1122 00:28:16.037477  193202 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1122 00:28:16.037488  193202 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-9122/.minikube/addons for local assets ...
	I1122 00:28:16.037541  193202 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-9122/.minikube/files for local assets ...
	I1122 00:28:16.037659  193202 filesync.go:149] local asset: /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem -> 145852.pem in /etc/ssl/certs
	I1122 00:28:16.037794  193202 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1122 00:28:16.047663  193202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem --> /etc/ssl/certs/145852.pem (1708 bytes)
	I1122 00:28:16.075361  193202 start.go:303] post-start completed in 153.548208ms
	I1122 00:28:16.075717  193202 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-670577
	I1122 00:28:16.094079  193202 profile.go:148] Saving config to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/running-upgrade-670577/config.json ...
	I1122 00:28:16.094307  193202 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:28:16.094349  193202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-670577
	I1122 00:28:16.112074  193202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32988 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/running-upgrade-670577/id_rsa Username:docker}
	I1122 00:28:16.196810  193202 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 00:28:16.201764  193202 start.go:128] duration metric: createHost completed in 18.577925614s
	I1122 00:28:16.201781  193202 start.go:83] releasing machines lock for "running-upgrade-670577", held for 18.578076677s
	I1122 00:28:16.201846  193202 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-670577
	I1122 00:28:16.221146  193202 ssh_runner.go:195] Run: cat /version.json
	I1122 00:28:16.221201  193202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-670577
	I1122 00:28:16.221218  193202 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1122 00:28:16.221284  193202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-670577
	I1122 00:28:16.239354  193202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32988 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/running-upgrade-670577/id_rsa Username:docker}
	I1122 00:28:16.241004  193202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32988 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/running-upgrade-670577/id_rsa Username:docker}
	I1122 00:28:16.422397  193202 ssh_runner.go:195] Run: systemctl --version
	I1122 00:28:16.426985  193202 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1122 00:28:16.569377  193202 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1122 00:28:16.573976  193202 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1122 00:28:16.596453  193202 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1122 00:28:16.596543  193202 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1122 00:28:16.626448  193202 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1122 00:28:16.626464  193202 start.go:472] detecting cgroup driver to use...
	I1122 00:28:16.626491  193202 detect.go:199] detected "systemd" cgroup driver on host os
	I1122 00:28:16.626529  193202 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1122 00:28:16.640927  193202 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1122 00:28:16.653473  193202 docker.go:203] disabling cri-docker service (if available) ...
	I1122 00:28:16.653509  193202 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1122 00:28:16.667453  193202 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1122 00:28:16.684318  193202 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1122 00:28:16.765141  193202 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1122 00:28:16.860190  193202 docker.go:219] disabling docker service ...
	I1122 00:28:16.860271  193202 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1122 00:28:16.878796  193202 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1122 00:28:16.894745  193202 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1122 00:28:16.976935  193202 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1122 00:28:17.112718  193202 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1122 00:28:17.123772  193202 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1122 00:28:17.139685  193202 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1122 00:28:17.139733  193202 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:28:17.221652  193202 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1122 00:28:17.221702  193202 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:28:17.271317  193202 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:28:17.353026  193202 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
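The sed invocations rewrite CRI-O's drop-in in place: the pause image is pinned, any existing cgroup_manager line is overwritten with "systemd", stale conmon_cgroup lines are dropped, and conmon_cgroup = "pod" is re-added right after the manager setting. A rough Go equivalent of the cgroup part of that line surgery (a sketch, assuming the drop-in exists; not the code minikube runs):

	package main

	import (
		"os"
		"regexp"
	)

	// rewriteCrioConf applies the same edits as the cgroup-related sed
	// commands above: force cgroup_manager to "systemd" and pin
	// conmon_cgroup to "pod" immediately after it.
	func rewriteCrioConf(path string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		// Replace any cgroup_manager line, as sed 's|^.*cgroup_manager = .*$|...|' does.
		reMgr := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
		out := reMgr.ReplaceAll(data, []byte(`cgroup_manager = "systemd"`))
		// Drop stale conmon_cgroup lines, then re-add after the manager line.
		reConmon := regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`)
		out = reConmon.ReplaceAll(out, nil)
		out = regexp.MustCompile(`(?m)^cgroup_manager = "systemd"$`).
			ReplaceAll(out, []byte("cgroup_manager = \"systemd\"\nconmon_cgroup = \"pod\""))
		return os.WriteFile(path, out, 0644)
	}

	func main() {
		_ = rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf")
	}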
	I1122 00:28:17.480444  193202 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1122 00:28:17.490385  193202 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1122 00:28:17.498417  193202 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1122 00:28:17.506613  193202 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:28:17.621809  193202 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1122 00:28:18.718269  193202 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.096428716s)
	I1122 00:28:18.718289  193202 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1122 00:28:18.718338  193202 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1122 00:28:18.722652  193202 start.go:540] Will wait 60s for crictl version
	I1122 00:28:18.722699  193202 ssh_runner.go:195] Run: which crictl
	I1122 00:28:18.727317  193202 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1122 00:28:18.768419  193202 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1122 00:28:18.768499  193202 ssh_runner.go:195] Run: crio --version
	I1122 00:28:18.807650  193202 ssh_runner.go:195] Run: crio --version
	I1122 00:28:18.851617  193202 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.6 ...
	I1122 00:28:18.853115  193202 cli_runner.go:164] Run: docker network inspect running-upgrade-670577 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:28:18.874951  193202 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1122 00:28:18.878607  193202 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
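The bash one-liner is an idempotent /etc/hosts update: grep -v strips any stale host.minikube.internal line, the fresh mapping is appended, and the result is staged through /tmp/h.$$ before being copied back with sudo cp. A Go sketch of the same rewrite (hypothetical; this version writes the file directly instead of staging through a temp file):

	package main

	import (
		"os"
		"strings"
	)

	// ensureHostsEntry mirrors the bash one-liner above: drop any line
	// ending in "<tab>name" (the grep -v), append the fresh "ip<tab>name"
	// mapping, and write the file back.
	func ensureHostsEntry(ip, name string) error {
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+name) {
				continue // drop the stale entry
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+name)
		return os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		// Values from the log: host.minikube.internal -> the network gateway.
		if err := ensureHostsEntry("192.168.103.1", "host.minikube.internal"); err != nil {
			panic(err)
		}
	}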
	I1122 00:28:18.893506  193202 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1122 00:28:18.893558  193202 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:28:18.961843  193202 crio.go:496] all images are preloaded for cri-o runtime.
	I1122 00:28:18.961861  193202 crio.go:415] Images already preloaded, skipping extraction
	I1122 00:28:18.961913  193202 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:28:19.003827  193202 crio.go:496] all images are preloaded for cri-o runtime.
	I1122 00:28:19.003844  193202 cache_images.go:84] Images are preloaded, skipping loading
	I1122 00:28:19.003918  193202 ssh_runner.go:195] Run: crio config
	I1122 00:28:19.053271  193202 cni.go:84] Creating CNI manager for ""
	I1122 00:28:19.053284  193202 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1122 00:28:19.053317  193202 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1122 00:28:19.053341  193202 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-670577 NodeName:running-upgrade-670577 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1122 00:28:19.053513  193202 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "running-upgrade-670577"
	  kubeletExtraArgs:
	    node-ip: 192.168.103.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1122 00:28:19.053604  193202 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=running-upgrade-670577 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:running-upgrade-670577 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
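Note the paired ExecStart= lines in the kubelet drop-in above: the empty one is the standard systemd idiom for clearing the base unit's command before the override supplies its own. A Go sketch that writes this 10-kubeadm.conf drop-in verbatim (hypothetical; minikube actually transfers the unit over SSH, as the scp lines below show):

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// Unit contents copied from the log; the empty "ExecStart=" resets
		// the base kubelet.service command before the new one is set.
		unit := `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=running-upgrade-670577 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2

	[Install]
	`
		path := "/etc/systemd/system/kubelet.service.d/10-kubeadm.conf"
		if err := os.WriteFile(path, []byte(unit), 0644); err != nil {
			panic(err)
		}
		fmt.Println("wrote", path)
	}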
	I1122 00:28:19.053654  193202 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1122 00:28:19.063988  193202 binaries.go:44] Found k8s binaries, skipping transfer
	I1122 00:28:19.064087  193202 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1122 00:28:19.074514  193202 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I1122 00:28:19.095036  193202 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1122 00:28:19.118147  193202 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
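The rendered kubeadm config staged to /var/tmp/minikube/kubeadm.yaml.new above is a four-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration, separated by ---. A quick Go sketch that splits the stream and prints each document's kind as a sanity check (assumes gopkg.in/yaml.v3 is available; it is not part of the test harness):

	package main

	import (
		"fmt"
		"os"
		"strings"

		"gopkg.in/yaml.v3" // assumed dependency, for illustration only
	)

	func main() {
		data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			panic(err)
		}
		// Each document carries its own apiVersion/kind header.
		for _, doc := range strings.Split(string(data), "\n---\n") {
			var meta struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := yaml.Unmarshal([]byte(doc), &meta); err != nil {
				panic(err)
			}
			fmt.Printf("%s (%s)\n", meta.Kind, meta.APIVersion)
		}
	}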
	I1122 00:28:19.136905  193202 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1122 00:28:19.140397  193202 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:28:19.151664  193202 certs.go:56] Setting up /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/running-upgrade-670577 for IP: 192.168.103.2
	I1122 00:28:19.151689  193202 certs.go:190] acquiring lock for shared ca certs: {Name:mk04e97b22fe69e444baefaaf0d53a0afe979470 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:28:19.151832  193202 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/21934-9122/.minikube/ca.key
	I1122 00:28:19.151869  193202 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/21934-9122/.minikube/proxy-client-ca.key
	I1122 00:28:19.151912  193202 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/running-upgrade-670577/client.key
	I1122 00:28:19.151923  193202 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/running-upgrade-670577/client.crt with IP's: []
	I1122 00:28:19.217545  193202 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/running-upgrade-670577/client.crt ...
	I1122 00:28:19.217566  193202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/running-upgrade-670577/client.crt: {Name:mk0568809c62747eabcee3b5df3b589cf6fb0169 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:28:19.217722  193202 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/running-upgrade-670577/client.key ...
	I1122 00:28:19.217735  193202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/running-upgrade-670577/client.key: {Name:mk0e843cac96c166ac471d288e9c151cc4549ed7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:28:19.217842  193202 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/running-upgrade-670577/apiserver.key.33fce0b9
	I1122 00:28:19.217856  193202 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/running-upgrade-670577/apiserver.crt.33fce0b9 with IP's: [192.168.103.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1122 00:28:19.526799  193202 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/running-upgrade-670577/apiserver.crt.33fce0b9 ...
	I1122 00:28:19.526821  193202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/running-upgrade-670577/apiserver.crt.33fce0b9: {Name:mkea39b5805b5daad08075e952721792601e3653 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:28:19.526990  193202 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/running-upgrade-670577/apiserver.key.33fce0b9 ...
	I1122 00:28:19.527001  193202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/running-upgrade-670577/apiserver.key.33fce0b9: {Name:mk8646baab87d81020e45de4236fa34270adfcca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:28:19.527100  193202 certs.go:337] copying /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/running-upgrade-670577/apiserver.crt.33fce0b9 -> /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/running-upgrade-670577/apiserver.crt
	I1122 00:28:19.527184  193202 certs.go:341] copying /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/running-upgrade-670577/apiserver.key.33fce0b9 -> /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/running-upgrade-670577/apiserver.key
	I1122 00:28:19.527254  193202 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/running-upgrade-670577/proxy-client.key
	I1122 00:28:19.527267  193202 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/running-upgrade-670577/proxy-client.crt with IP's: []
	I1122 00:28:19.684940  193202 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/running-upgrade-670577/proxy-client.crt ...
	I1122 00:28:19.684955  193202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/running-upgrade-670577/proxy-client.crt: {Name:mk73aa95d4504bbc9fff1c6e3fabc5ce76da1fbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:28:19.685135  193202 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/running-upgrade-670577/proxy-client.key ...
	I1122 00:28:19.685147  193202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/running-upgrade-670577/proxy-client.key: {Name:mk4e5b535951d5e2d8af70a3478357056fcca5fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
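The apiserver certificate above is issued with four IP SANs: the node IP (192.168.103.2), the first address of the service CIDR (10.96.0.1, where kubernetes.default resolves), loopback, and 10.0.0.1. A self-contained Go sketch issuing a certificate with the same IP SANs (self-signed for brevity; minikube signs with its minikubeCA instead):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// The same IP SANs the log shows for the apiserver cert.
			IPAddresses: []net.IP{
				net.ParseIP("192.168.103.2"), // node IP
				net.ParseIP("10.96.0.1"),     // first service-CIDR address
				net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"),
			},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}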
	I1122 00:28:19.685340  193202 certs.go:437] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/home/jenkins/minikube-integration/21934-9122/.minikube/certs/14585.pem (1338 bytes)
	W1122 00:28:19.685372  193202 certs.go:433] ignoring /home/jenkins/minikube-integration/21934-9122/.minikube/certs/home/jenkins/minikube-integration/21934-9122/.minikube/certs/14585_empty.pem, impossibly tiny 0 bytes
	I1122 00:28:19.685381  193202 certs.go:437] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca-key.pem (1679 bytes)
	I1122 00:28:19.685401  193202 certs.go:437] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem (1078 bytes)
	I1122 00:28:19.685426  193202 certs.go:437] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem (1123 bytes)
	I1122 00:28:19.685445  193202 certs.go:437] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/home/jenkins/minikube-integration/21934-9122/.minikube/certs/key.pem (1675 bytes)
	I1122 00:28:19.685481  193202 certs.go:437] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem (1708 bytes)
	I1122 00:28:19.686202  193202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/running-upgrade-670577/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1122 00:28:19.713127  193202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/running-upgrade-670577/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1122 00:28:19.736820  193202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/running-upgrade-670577/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1122 00:28:19.758842  193202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/running-upgrade-670577/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1122 00:28:19.781168  193202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1122 00:28:19.804736  193202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1122 00:28:19.828293  193202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1122 00:28:19.851526  193202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1122 00:28:19.874651  193202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem --> /usr/share/ca-certificates/145852.pem (1708 bytes)
	I1122 00:28:19.900064  193202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1122 00:28:19.922429  193202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/certs/14585.pem --> /usr/share/ca-certificates/14585.pem (1338 bytes)
	I1122 00:28:19.945126  193202 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1122 00:28:19.962418  193202 ssh_runner.go:195] Run: openssl version
	I1122 00:28:19.967689  193202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145852.pem && ln -fs /usr/share/ca-certificates/145852.pem /etc/ssl/certs/145852.pem"
	I1122 00:28:19.976675  193202 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145852.pem
	I1122 00:28:19.980192  193202 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 21 23:52 /usr/share/ca-certificates/145852.pem
	I1122 00:28:19.980246  193202 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145852.pem
	I1122 00:28:19.986438  193202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145852.pem /etc/ssl/certs/3ec20f2e.0"
	I1122 00:28:19.996497  193202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1122 00:28:20.006154  193202 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:28:20.010141  193202 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 21 23:46 /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:28:20.010186  193202 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:28:20.017222  193202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1122 00:28:20.026654  193202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14585.pem && ln -fs /usr/share/ca-certificates/14585.pem /etc/ssl/certs/14585.pem"
	I1122 00:28:20.036525  193202 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14585.pem
	I1122 00:28:20.040088  193202 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 21 23:52 /usr/share/ca-certificates/14585.pem
	I1122 00:28:20.040123  193202 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14585.pem
	I1122 00:28:20.046813  193202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14585.pem /etc/ssl/certs/51391683.0"
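Each openssl x509 -hash -noout / ln -fs pair above builds OpenSSL's hashed-directory layout: the PEM is symlinked to /etc/ssl/certs/<subject-hash>.0 (e.g. b5213941.0 for minikubeCA.pem) so the library can locate it during chain verification. A Go sketch of one such link (hypothetical helper; it shells out to the same openssl invocation the log shows):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// linkByHash reproduces the openssl-hash + ln -fs pair: compute the
	// subject hash of a PEM cert and symlink it into /etc/ssl/certs/<hash>.0.
	func linkByHash(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		_ = os.Remove(link) // mimic ln -fs (force re-link)
		return os.Symlink(pemPath, link)
	}

	func main() {
		// minikubeCA.pem hashes to b5213941 in the log above.
		if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			panic(err)
		}
	}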
	I1122 00:28:20.056066  193202 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1122 00:28:20.059320  193202 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1122 00:28:20.059371  193202 kubeadm.go:404] StartCluster: {Name:running-upgrade-670577 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:running-upgrade-670577 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1122 00:28:20.059429  193202 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1122 00:28:20.059464  193202 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1122 00:28:20.094682  193202 cri.go:89] found id: ""
	I1122 00:28:20.094739  193202 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1122 00:28:20.103412  193202 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1122 00:28:20.111959  193202 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1122 00:28:20.111992  193202 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1122 00:28:20.120800  193202 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1122 00:28:20.120843  193202 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1122 00:28:20.167818  193202 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1122 00:28:20.167897  193202 kubeadm.go:322] [preflight] Running pre-flight checks
	I1122 00:28:20.206497  193202 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1122 00:28:20.206581  193202 kubeadm.go:322] KERNEL_VERSION: 6.8.0-1044-gcp
	I1122 00:28:20.206642  193202 kubeadm.go:322] OS: Linux
	I1122 00:28:20.206716  193202 kubeadm.go:322] CGROUPS_CPU: enabled
	I1122 00:28:20.206794  193202 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1122 00:28:20.206855  193202 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1122 00:28:20.206913  193202 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1122 00:28:20.206965  193202 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1122 00:28:20.207024  193202 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1122 00:28:20.207141  193202 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1122 00:28:20.207217  193202 kubeadm.go:322] CGROUPS_IO: enabled
	I1122 00:28:20.279422  193202 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1122 00:28:20.279547  193202 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1122 00:28:20.279661  193202 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1122 00:28:20.492125  193202 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1122 00:28:18.845089  185312 cli_runner.go:164] Run: docker network inspect stopped-upgrade-220412 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:28:18.869350  185312 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1122 00:28:18.874280  185312 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:28:18.888746  185312 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1122 00:28:18.888792  185312 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:28:18.954663  185312 crio.go:496] all images are preloaded for cri-o runtime.
	I1122 00:28:18.954682  185312 crio.go:415] Images already preloaded, skipping extraction
	I1122 00:28:18.954742  185312 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:28:18.991653  185312 crio.go:496] all images are preloaded for cri-o runtime.
	I1122 00:28:18.991671  185312 cache_images.go:84] Images are preloaded, skipping loading
	I1122 00:28:18.991750  185312 ssh_runner.go:195] Run: crio config
	I1122 00:28:19.041882  185312 cni.go:84] Creating CNI manager for ""
	I1122 00:28:19.041901  185312 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1122 00:28:19.041927  185312 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1122 00:28:19.041956  185312 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-220412 NodeName:stopped-upgrade-220412 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1122 00:28:19.042190  185312 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "stopped-upgrade-220412"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1122 00:28:19.042267  185312 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=stopped-upgrade-220412 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:stopped-upgrade-220412 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1122 00:28:19.042331  185312 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1122 00:28:19.053512  185312 binaries.go:44] Found k8s binaries, skipping transfer
	I1122 00:28:19.053589  185312 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1122 00:28:19.065212  185312 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I1122 00:28:19.085573  185312 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1122 00:28:19.107228  185312 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I1122 00:28:19.127869  185312 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1122 00:28:19.131705  185312 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:28:19.143335  185312 certs.go:56] Setting up /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/stopped-upgrade-220412 for IP: 192.168.85.2
	I1122 00:28:19.143373  185312 certs.go:190] acquiring lock for shared ca certs: {Name:mk04e97b22fe69e444baefaaf0d53a0afe979470 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:28:19.143515  185312 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/21934-9122/.minikube/ca.key
	I1122 00:28:19.143555  185312 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/21934-9122/.minikube/proxy-client-ca.key
	I1122 00:28:19.143598  185312 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/stopped-upgrade-220412/client.key
	I1122 00:28:19.143606  185312 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/stopped-upgrade-220412/client.crt with IP's: []
	I1122 00:28:19.366878  185312 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/stopped-upgrade-220412/client.crt ...
	I1122 00:28:19.366899  185312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/stopped-upgrade-220412/client.crt: {Name:mkc01826b8e32a27122ca93da8cd3c152feb840d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:28:19.367091  185312 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/stopped-upgrade-220412/client.key ...
	I1122 00:28:19.367109  185312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/stopped-upgrade-220412/client.key: {Name:mk3a66720974ad7ca5234fbaf14f5e4b5ab3e9c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:28:19.367248  185312 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/stopped-upgrade-220412/apiserver.key.43b9df8c
	I1122 00:28:19.367265  185312 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/stopped-upgrade-220412/apiserver.crt.43b9df8c with IP's: [192.168.85.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1122 00:28:19.689412  185312 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/stopped-upgrade-220412/apiserver.crt.43b9df8c ...
	I1122 00:28:19.689429  185312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/stopped-upgrade-220412/apiserver.crt.43b9df8c: {Name:mk040457a1f8472f1aff6b3c60c98a44e3ddbb7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:28:19.689565  185312 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/stopped-upgrade-220412/apiserver.key.43b9df8c ...
	I1122 00:28:19.689572  185312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/stopped-upgrade-220412/apiserver.key.43b9df8c: {Name:mk5719680b501ff8240186054bcec85fe6401669 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:28:19.689641  185312 certs.go:337] copying /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/stopped-upgrade-220412/apiserver.crt.43b9df8c -> /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/stopped-upgrade-220412/apiserver.crt
	I1122 00:28:19.689718  185312 certs.go:341] copying /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/stopped-upgrade-220412/apiserver.key.43b9df8c -> /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/stopped-upgrade-220412/apiserver.key
	I1122 00:28:19.689775  185312 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/stopped-upgrade-220412/proxy-client.key
	I1122 00:28:19.689784  185312 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/stopped-upgrade-220412/proxy-client.crt with IP's: []
	I1122 00:28:19.995979  185312 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/stopped-upgrade-220412/proxy-client.crt ...
	I1122 00:28:19.995996  185312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/stopped-upgrade-220412/proxy-client.crt: {Name:mkae622f1fc37f803c94532aca3594461c7cfc0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:28:19.996155  185312 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/stopped-upgrade-220412/proxy-client.key ...
	I1122 00:28:19.996165  185312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/stopped-upgrade-220412/proxy-client.key: {Name:mkec79bc8ddf3f4f154d9d1d1d0ce16ee6f442a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:28:19.996381  185312 certs.go:437] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/home/jenkins/minikube-integration/21934-9122/.minikube/certs/14585.pem (1338 bytes)
	W1122 00:28:19.996413  185312 certs.go:433] ignoring /home/jenkins/minikube-integration/21934-9122/.minikube/certs/home/jenkins/minikube-integration/21934-9122/.minikube/certs/14585_empty.pem, impossibly tiny 0 bytes
	I1122 00:28:19.996429  185312 certs.go:437] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca-key.pem (1679 bytes)
	I1122 00:28:19.996469  185312 certs.go:437] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem (1078 bytes)
	I1122 00:28:19.996499  185312 certs.go:437] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem (1123 bytes)
	I1122 00:28:19.996534  185312 certs.go:437] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/home/jenkins/minikube-integration/21934-9122/.minikube/certs/key.pem (1675 bytes)
	I1122 00:28:19.996598  185312 certs.go:437] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem (1708 bytes)
	I1122 00:28:19.997546  185312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/stopped-upgrade-220412/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1122 00:28:20.022565  185312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/stopped-upgrade-220412/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1122 00:28:20.046220  185312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/stopped-upgrade-220412/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1122 00:28:20.070274  185312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/stopped-upgrade-220412/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1122 00:28:20.093694  185312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1122 00:28:20.117433  185312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1122 00:28:20.140895  185312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1122 00:28:20.167281  185312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1122 00:28:20.192157  185312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1122 00:28:20.222940  185312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/certs/14585.pem --> /usr/share/ca-certificates/14585.pem (1338 bytes)
	I1122 00:28:20.248844  185312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem --> /usr/share/ca-certificates/145852.pem (1708 bytes)
	I1122 00:28:20.275248  185312 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1122 00:28:20.294848  185312 ssh_runner.go:195] Run: openssl version
	I1122 00:28:20.301114  185312 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1122 00:28:20.311571  185312 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:28:20.314987  185312 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 21 23:46 /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:28:20.315040  185312 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:28:20.322410  185312 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1122 00:28:20.332162  185312 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14585.pem && ln -fs /usr/share/ca-certificates/14585.pem /etc/ssl/certs/14585.pem"
	I1122 00:28:20.341363  185312 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14585.pem
	I1122 00:28:20.344573  185312 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 21 23:52 /usr/share/ca-certificates/14585.pem
	I1122 00:28:20.344617  185312 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14585.pem
	I1122 00:28:20.351349  185312 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14585.pem /etc/ssl/certs/51391683.0"
	I1122 00:28:20.362115  185312 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145852.pem && ln -fs /usr/share/ca-certificates/145852.pem /etc/ssl/certs/145852.pem"
	I1122 00:28:20.371887  185312 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145852.pem
	I1122 00:28:20.376383  185312 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 21 23:52 /usr/share/ca-certificates/145852.pem
	I1122 00:28:20.376426  185312 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145852.pem
	I1122 00:28:20.383952  185312 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145852.pem /etc/ssl/certs/3ec20f2e.0"
	I1122 00:28:20.394773  185312 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1122 00:28:20.398375  185312 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1122 00:28:20.398427  185312 kubeadm.go:404] StartCluster: {Name:stopped-upgrade-220412 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:stopped-upgrade-220412 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1122 00:28:20.398535  185312 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1122 00:28:20.398602  185312 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1122 00:28:20.436858  185312 cri.go:89] found id: ""
	I1122 00:28:20.436913  185312 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1122 00:28:20.447827  185312 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1122 00:28:20.457822  185312 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1122 00:28:20.457872  185312 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1122 00:28:20.468073  185312 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1122 00:28:20.468116  185312 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1122 00:28:20.516590  185312 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1122 00:28:20.516655  185312 kubeadm.go:322] [preflight] Running pre-flight checks
	I1122 00:28:20.553599  185312 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1122 00:28:20.553680  185312 kubeadm.go:322] KERNEL_VERSION: 6.8.0-1044-gcp
	I1122 00:28:20.553724  185312 kubeadm.go:322] OS: Linux
	I1122 00:28:20.553829  185312 kubeadm.go:322] CGROUPS_CPU: enabled
	I1122 00:28:20.553900  185312 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1122 00:28:20.553969  185312 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1122 00:28:20.554067  185312 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1122 00:28:20.554141  185312 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1122 00:28:20.554199  185312 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1122 00:28:20.554265  185312 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1122 00:28:20.554320  185312 kubeadm.go:322] CGROUPS_IO: enabled
	I1122 00:28:20.630484  185312 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1122 00:28:20.630623  185312 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1122 00:28:20.630749  185312 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1122 00:28:20.840674  185312 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1122 00:28:18.641179  194936 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21934-9122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v cert-expiration-624739:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -I lz4 -xf /preloaded.tar -C /extractDir: (4.662717409s)
	I1122 00:28:18.641205  194936 kic.go:203] duration metric: took 4.662864572s to extract preloaded images to volume ...
	W1122 00:28:18.641294  194936 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1122 00:28:18.641328  194936 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1122 00:28:18.641371  194936 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1122 00:28:18.704152  194936 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cert-expiration-624739 --name cert-expiration-624739 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-624739 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cert-expiration-624739 --network cert-expiration-624739 --ip 192.168.94.2 --volume cert-expiration-624739:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e
	I1122 00:28:19.014071  194936 cli_runner.go:164] Run: docker container inspect cert-expiration-624739 --format={{.State.Running}}
	I1122 00:28:19.034810  194936 cli_runner.go:164] Run: docker container inspect cert-expiration-624739 --format={{.State.Status}}
	I1122 00:28:19.055668  194936 cli_runner.go:164] Run: docker exec cert-expiration-624739 stat /var/lib/dpkg/alternatives/iptables
	I1122 00:28:19.108344  194936 oci.go:144] the created container "cert-expiration-624739" has a running status.
	I1122 00:28:19.108367  194936 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21934-9122/.minikube/machines/cert-expiration-624739/id_rsa...
	I1122 00:28:19.195829  194936 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21934-9122/.minikube/machines/cert-expiration-624739/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1122 00:28:19.221855  194936 cli_runner.go:164] Run: docker container inspect cert-expiration-624739 --format={{.State.Status}}
	I1122 00:28:19.245113  194936 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1122 00:28:19.245128  194936 kic_runner.go:114] Args: [docker exec --privileged cert-expiration-624739 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1122 00:28:19.292506  194936 cli_runner.go:164] Run: docker container inspect cert-expiration-624739 --format={{.State.Status}}
	I1122 00:28:19.323417  194936 machine.go:94] provisionDockerMachine start ...
	I1122 00:28:19.323535  194936 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-624739
	I1122 00:28:19.347713  194936 main.go:143] libmachine: Using SSH client type: native
	I1122 00:28:19.348012  194936 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32998 <nil> <nil>}
	I1122 00:28:19.348025  194936 main.go:143] libmachine: About to run SSH command:
	hostname
	I1122 00:28:19.477870  194936 main.go:143] libmachine: SSH cmd err, output: <nil>: cert-expiration-624739
	
	I1122 00:28:19.477889  194936 ubuntu.go:182] provisioning hostname "cert-expiration-624739"
	I1122 00:28:19.477962  194936 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-624739
	I1122 00:28:19.497665  194936 main.go:143] libmachine: Using SSH client type: native
	I1122 00:28:19.497991  194936 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32998 <nil> <nil>}
	I1122 00:28:19.498001  194936 main.go:143] libmachine: About to run SSH command:
	sudo hostname cert-expiration-624739 && echo "cert-expiration-624739" | sudo tee /etc/hostname
	I1122 00:28:19.630923  194936 main.go:143] libmachine: SSH cmd err, output: <nil>: cert-expiration-624739
	
	I1122 00:28:19.630984  194936 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-624739
	I1122 00:28:19.650270  194936 main.go:143] libmachine: Using SSH client type: native
	I1122 00:28:19.650584  194936 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32998 <nil> <nil>}
	I1122 00:28:19.650605  194936 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-624739' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-624739/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-624739' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1122 00:28:19.775006  194936 main.go:143] libmachine: SSH cmd err, output: <nil>: 
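provisionDockerMachine drives all of this over the container's published SSH port (127.0.0.1:32998 here), authenticating with the generated machine key and running one shell snippet per step. A minimal Go sketch of such an exchange using golang.org/x/crypto/ssh (an assumption for illustration; minikube wraps its own SSH client, as the main.go/libmachine lines show):

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh" // assumed dependency, for illustration only
	)

	func main() {
		// Key path and port taken from the log for cert-expiration-624739.
		keyBytes, err := os.ReadFile(os.Getenv("HOME") + "/.minikube/machines/cert-expiration-624739/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(keyBytes)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test rigs skip host-key pinning
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:32998", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()
		// Run the same first provisioning command the log shows.
		out, err := sess.CombinedOutput("hostname")
		if err != nil {
			panic(err)
		}
		fmt.Printf("remote hostname: %s", out)
	}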
	I1122 00:28:19.775034  194936 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21934-9122/.minikube CaCertPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21934-9122/.minikube}
	I1122 00:28:19.775077  194936 ubuntu.go:190] setting up certificates
	I1122 00:28:19.775089  194936 provision.go:84] configureAuth start
	I1122 00:28:19.775153  194936 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-624739
	I1122 00:28:19.795101  194936 provision.go:143] copyHostCerts
	I1122 00:28:19.795155  194936 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9122/.minikube/key.pem, removing ...
	I1122 00:28:19.795162  194936 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9122/.minikube/key.pem
	I1122 00:28:19.795222  194936 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21934-9122/.minikube/key.pem (1675 bytes)
	I1122 00:28:19.795305  194936 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9122/.minikube/ca.pem, removing ...
	I1122 00:28:19.795309  194936 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9122/.minikube/ca.pem
	I1122 00:28:19.795334  194936 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21934-9122/.minikube/ca.pem (1078 bytes)
	I1122 00:28:19.795395  194936 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9122/.minikube/cert.pem, removing ...
	I1122 00:28:19.795398  194936 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9122/.minikube/cert.pem
	I1122 00:28:19.795420  194936 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21934-9122/.minikube/cert.pem (1123 bytes)
	I1122 00:28:19.795483  194936 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21934-9122/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-624739 san=[127.0.0.1 192.168.94.2 cert-expiration-624739 localhost minikube]
	I1122 00:28:19.813066  194936 provision.go:177] copyRemoteCerts
	I1122 00:28:19.813113  194936 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1122 00:28:19.813146  194936 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-624739
	I1122 00:28:19.832106  194936 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32998 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/cert-expiration-624739/id_rsa Username:docker}
	I1122 00:28:19.921489  194936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1122 00:28:19.939948  194936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1122 00:28:19.956793  194936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1122 00:28:19.973543  194936 provision.go:87] duration metric: took 198.442382ms to configureAuth
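configureAuth above copies the host CA material and signs a server certificate for the SANs listed at provision.go:117. A rough openssl equivalent of that signing step (file names illustrative; minikube does this in-process rather than by shelling out to openssl):

  openssl genrsa -out server-key.pem 2048
  openssl req -new -key server-key.pem -subj '/O=jenkins.cert-expiration-624739' -out server.csr
  openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
    -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.94.2,DNS:cert-expiration-624739,DNS:localhost,DNS:minikube') \
    -out server.pem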
	I1122 00:28:19.973563  194936 ubuntu.go:206] setting minikube options for container-runtime
	I1122 00:28:19.973702  194936 config.go:182] Loaded profile config "cert-expiration-624739": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:28:19.973794  194936 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-624739
	I1122 00:28:19.993193  194936 main.go:143] libmachine: Using SSH client type: native
	I1122 00:28:19.993415  194936 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32998 <nil> <nil>}
	I1122 00:28:19.993425  194936 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1122 00:28:20.259589  194936 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1122 00:28:20.259612  194936 machine.go:97] duration metric: took 936.1766ms to provisionDockerMachine
	I1122 00:28:20.259623  194936 client.go:176] duration metric: took 12.131508346s to LocalClient.Create
	I1122 00:28:20.259646  194936 start.go:167] duration metric: took 12.131589366s to libmachine.API.Create "cert-expiration-624739"
	I1122 00:28:20.259654  194936 start.go:293] postStartSetup for "cert-expiration-624739" (driver="docker")
	I1122 00:28:20.259665  194936 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1122 00:28:20.259733  194936 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1122 00:28:20.259777  194936 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-624739
	I1122 00:28:20.278425  194936 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32998 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/cert-expiration-624739/id_rsa Username:docker}
	I1122 00:28:20.371432  194936 ssh_runner.go:195] Run: cat /etc/os-release
	I1122 00:28:20.375627  194936 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1122 00:28:20.375666  194936 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1122 00:28:20.375677  194936 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-9122/.minikube/addons for local assets ...
	I1122 00:28:20.375722  194936 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-9122/.minikube/files for local assets ...
	I1122 00:28:20.375785  194936 filesync.go:149] local asset: /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem -> 145852.pem in /etc/ssl/certs
	I1122 00:28:20.375860  194936 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1122 00:28:20.383761  194936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem --> /etc/ssl/certs/145852.pem (1708 bytes)
	I1122 00:28:20.405391  194936 start.go:296] duration metric: took 145.723028ms for postStartSetup
	I1122 00:28:20.405695  194936 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-624739
	I1122 00:28:20.424915  194936 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/cert-expiration-624739/config.json ...
	I1122 00:28:20.425251  194936 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:28:20.425304  194936 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-624739
	I1122 00:28:20.446515  194936 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32998 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/cert-expiration-624739/id_rsa Username:docker}
	I1122 00:28:20.535850  194936 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 00:28:20.540871  194936 start.go:128] duration metric: took 12.418886782s to createHost
	I1122 00:28:20.540890  194936 start.go:83] releasing machines lock for "cert-expiration-624739", held for 12.41901585s
	I1122 00:28:20.540955  194936 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-624739
	I1122 00:28:20.561200  194936 ssh_runner.go:195] Run: cat /version.json
	I1122 00:28:20.561257  194936 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-624739
	I1122 00:28:20.561417  194936 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1122 00:28:20.561481  194936 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-624739
	I1122 00:28:20.580460  194936 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32998 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/cert-expiration-624739/id_rsa Username:docker}
	I1122 00:28:20.580721  194936 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32998 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/cert-expiration-624739/id_rsa Username:docker}
	I1122 00:28:20.730552  194936 ssh_runner.go:195] Run: systemctl --version
	I1122 00:28:20.738131  194936 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1122 00:28:20.775485  194936 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1122 00:28:20.780019  194936 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1122 00:28:20.780098  194936 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1122 00:28:20.807306  194936 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
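The find invocation above is logged with its shell escaping stripped, so it is not runnable as printed. A directly runnable form of the same disable step (rename bridge/podman CNI configs out of the way unless already disabled):

  sudo find /etc/cni/net.d -maxdepth 1 -type f \
    \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
    -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;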
	I1122 00:28:20.807327  194936 start.go:496] detecting cgroup driver to use...
	I1122 00:28:20.807372  194936 detect.go:190] detected "systemd" cgroup driver on host os
	I1122 00:28:20.807423  194936 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1122 00:28:20.826174  194936 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1122 00:28:20.839027  194936 docker.go:218] disabling cri-docker service (if available) ...
	I1122 00:28:20.839093  194936 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1122 00:28:20.858108  194936 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1122 00:28:20.877974  194936 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1122 00:28:20.968403  194936 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1122 00:28:21.056158  194936 docker.go:234] disabling docker service ...
	I1122 00:28:21.056229  194936 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1122 00:28:21.072744  194936 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1122 00:28:21.083886  194936 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1122 00:28:21.170481  194936 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1122 00:28:21.255944  194936 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1122 00:28:21.267070  194936 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1122 00:28:21.279921  194936 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1122 00:28:21.279961  194936 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:28:21.288986  194936 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1122 00:28:21.289034  194936 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:28:21.296973  194936 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:28:21.304624  194936 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:28:21.312412  194936 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1122 00:28:21.319404  194936 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:28:21.326859  194936 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:28:21.339107  194936 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:28:21.346995  194936 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1122 00:28:21.353655  194936 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1122 00:28:21.360320  194936 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:28:21.440218  194936 ssh_runner.go:195] Run: sudo systemctl restart crio
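Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf in roughly this state before the daemon-reload and crio restart (section names follow the stock CRI-O config layout; the values are the ones substituted in this log):

  [crio.image]
  pause_image = "registry.k8s.io/pause:3.10.1"

  [crio.runtime]
  cgroup_manager = "systemd"
  conmon_cgroup = "pod"
  default_sysctls = [
    "net.ipv4.ip_unprivileged_port_start=0",
  ]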
	I1122 00:28:21.585342  194936 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1122 00:28:21.585404  194936 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1122 00:28:21.589223  194936 start.go:564] Will wait 60s for crictl version
	I1122 00:28:21.589274  194936 ssh_runner.go:195] Run: which crictl
	I1122 00:28:21.592716  194936 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1122 00:28:21.617315  194936 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1122 00:28:21.617369  194936 ssh_runner.go:195] Run: crio --version
	I1122 00:28:21.643829  194936 ssh_runner.go:195] Run: crio --version
	I1122 00:28:21.670343  194936 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1122 00:28:20.842149  185312 out.go:204]   - Generating certificates and keys ...
	I1122 00:28:20.842289  185312 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1122 00:28:20.842404  185312 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1122 00:28:20.982146  185312 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1122 00:28:21.088953  185312 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1122 00:28:21.143934  185312 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1122 00:28:21.278379  185312 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1122 00:28:21.410709  185312 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1122 00:28:21.410961  185312 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost stopped-upgrade-220412] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1122 00:28:21.464228  185312 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1122 00:28:21.464404  185312 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost stopped-upgrade-220412] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1122 00:28:21.633646  185312 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1122 00:28:21.853031  185312 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1122 00:28:21.931523  185312 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1122 00:28:21.931646  185312 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1122 00:28:22.064948  185312 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1122 00:28:22.271503  185312 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1122 00:28:22.390578  185312 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1122 00:28:22.480302  185312 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1122 00:28:22.481342  185312 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1122 00:28:22.485563  185312 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1122 00:28:21.671319  194936 cli_runner.go:164] Run: docker network inspect cert-expiration-624739 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:28:21.687352  194936 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1122 00:28:21.691636  194936 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
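Annotated form of the /etc/hosts rewrite above: strip any stale host.minikube.internal entry, append the fresh mapping, and copy the temp file back into place ($$ expands to the shell PID, giving a unique temp name):

  {
    grep -v $'\thost.minikube.internal$' /etc/hosts   # keep every other line
    printf '192.168.94.1\thost.minikube.internal\n'   # append the gateway mapping
  } > /tmp/h.$$
  sudo cp /tmp/h.$$ /etc/hosts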
	I1122 00:28:21.701450  194936 kubeadm.go:884] updating cluster {Name:cert-expiration-624739 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-624739 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1122 00:28:21.701561  194936 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:28:21.701600  194936 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:28:21.733923  194936 crio.go:514] all images are preloaded for cri-o runtime.
	I1122 00:28:21.733934  194936 crio.go:433] Images already preloaded, skipping extraction
	I1122 00:28:21.733969  194936 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:28:21.757143  194936 crio.go:514] all images are preloaded for cri-o runtime.
	I1122 00:28:21.757153  194936 cache_images.go:86] Images are preloaded, skipping loading
	I1122 00:28:21.757160  194936 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1122 00:28:21.757232  194936 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=cert-expiration-624739 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-624739 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
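Two of the kubelet flags above travel together: kubelet rejects --cgroups-per-qos=false unless --enforce-node-allocatable is cleared, which is presumably why the drop-in sets both. To inspect the rendered unit on the node:

  systemctl cat kubelet                                       # base unit plus drop-ins
  cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf   # the 372-byte file scp'd below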
	I1122 00:28:21.757282  194936 ssh_runner.go:195] Run: crio config
	I1122 00:28:21.800716  194936 cni.go:84] Creating CNI manager for ""
	I1122 00:28:21.800733  194936 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1122 00:28:21.800752  194936 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1122 00:28:21.800781  194936 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-624739 NodeName:cert-expiration-624739 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1122 00:28:21.800978  194936 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-expiration-624739"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1122 00:28:21.801036  194936 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1122 00:28:21.808647  194936 binaries.go:51] Found k8s binaries, skipping transfer
	I1122 00:28:21.808692  194936 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1122 00:28:21.815866  194936 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1122 00:28:21.827900  194936 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1122 00:28:21.841922  194936 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
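The 2218-byte file written above is the three-document kubeadm config dumped earlier; it is later renamed to kubeadm.yaml and handed to "kubeadm init --config". Recent kubeadm releases can sanity-check such a file offline:

  sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml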
	I1122 00:28:21.853838  194936 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1122 00:28:21.857356  194936 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:28:21.866364  194936 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:28:21.947982  194936 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:28:21.972191  194936 certs.go:69] Setting up /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/cert-expiration-624739 for IP: 192.168.94.2
	I1122 00:28:21.972202  194936 certs.go:195] generating shared ca certs ...
	I1122 00:28:21.972221  194936 certs.go:227] acquiring lock for ca certs: {Name:mk04e97b22fe69e444baefaaf0d53a0afe979470 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:28:21.972387  194936 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21934-9122/.minikube/ca.key
	I1122 00:28:21.972442  194936 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21934-9122/.minikube/proxy-client-ca.key
	I1122 00:28:21.972458  194936 certs.go:257] generating profile certs ...
	I1122 00:28:21.972525  194936 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/cert-expiration-624739/client.key
	I1122 00:28:21.972541  194936 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/cert-expiration-624739/client.crt with IP's: []
	I1122 00:28:22.041901  194936 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/cert-expiration-624739/client.crt ...
	I1122 00:28:22.041916  194936 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/cert-expiration-624739/client.crt: {Name:mk3b7c1e754514b6aa3a7dcb39f458a3b77ce55c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:28:22.042100  194936 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/cert-expiration-624739/client.key ...
	I1122 00:28:22.042112  194936 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/cert-expiration-624739/client.key: {Name:mkcf525315559201b56fd3af0512e3f0d2a182ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:28:22.042229  194936 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/cert-expiration-624739/apiserver.key.1f3c2f42
	I1122 00:28:22.042241  194936 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/cert-expiration-624739/apiserver.crt.1f3c2f42 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1122 00:28:22.104922  194936 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/cert-expiration-624739/apiserver.crt.1f3c2f42 ...
	I1122 00:28:22.104936  194936 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/cert-expiration-624739/apiserver.crt.1f3c2f42: {Name:mkb28c8ef76a0486afddef46c0acac97eb13ee5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:28:22.105108  194936 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/cert-expiration-624739/apiserver.key.1f3c2f42 ...
	I1122 00:28:22.105120  194936 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/cert-expiration-624739/apiserver.key.1f3c2f42: {Name:mk2afd0a61b72c7ea831f79e1e40034b8cfc73e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:28:22.105235  194936 certs.go:382] copying /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/cert-expiration-624739/apiserver.crt.1f3c2f42 -> /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/cert-expiration-624739/apiserver.crt
	I1122 00:28:22.105308  194936 certs.go:386] copying /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/cert-expiration-624739/apiserver.key.1f3c2f42 -> /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/cert-expiration-624739/apiserver.key
	I1122 00:28:22.105356  194936 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/cert-expiration-624739/proxy-client.key
	I1122 00:28:22.105366  194936 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/cert-expiration-624739/proxy-client.crt with IP's: []
	I1122 00:28:22.288492  194936 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/cert-expiration-624739/proxy-client.crt ...
	I1122 00:28:22.288511  194936 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/cert-expiration-624739/proxy-client.crt: {Name:mke2855da8ddde5c4bd9293af3879dd3cf44e877 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:28:22.288679  194936 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/cert-expiration-624739/proxy-client.key ...
	I1122 00:28:22.288692  194936 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/cert-expiration-624739/proxy-client.key: {Name:mk6108117b9a5009d8a44f656395739341df96ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:28:22.288914  194936 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/14585.pem (1338 bytes)
	W1122 00:28:22.288957  194936 certs.go:480] ignoring /home/jenkins/minikube-integration/21934-9122/.minikube/certs/14585_empty.pem, impossibly tiny 0 bytes
	I1122 00:28:22.288966  194936 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca-key.pem (1679 bytes)
	I1122 00:28:22.289006  194936 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem (1078 bytes)
	I1122 00:28:22.289034  194936 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem (1123 bytes)
	I1122 00:28:22.289075  194936 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/key.pem (1675 bytes)
	I1122 00:28:22.289129  194936 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem (1708 bytes)
	I1122 00:28:22.289949  194936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1122 00:28:22.307500  194936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1122 00:28:22.323699  194936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1122 00:28:22.339719  194936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1122 00:28:22.355795  194936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/cert-expiration-624739/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1122 00:28:22.371557  194936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/cert-expiration-624739/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1122 00:28:22.387505  194936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/cert-expiration-624739/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1122 00:28:22.403631  194936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/cert-expiration-624739/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1122 00:28:22.420484  194936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem --> /usr/share/ca-certificates/145852.pem (1708 bytes)
	I1122 00:28:22.438075  194936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1122 00:28:22.453994  194936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/certs/14585.pem --> /usr/share/ca-certificates/14585.pem (1338 bytes)
	I1122 00:28:22.469748  194936 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1122 00:28:22.481365  194936 ssh_runner.go:195] Run: openssl version
	I1122 00:28:22.487652  194936 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1122 00:28:22.496233  194936 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:28:22.500368  194936 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 23:46 /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:28:22.500407  194936 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:28:22.539696  194936 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1122 00:28:22.547869  194936 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14585.pem && ln -fs /usr/share/ca-certificates/14585.pem /etc/ssl/certs/14585.pem"
	I1122 00:28:22.555463  194936 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14585.pem
	I1122 00:28:22.558758  194936 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 23:52 /usr/share/ca-certificates/14585.pem
	I1122 00:28:22.558799  194936 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14585.pem
	I1122 00:28:22.595134  194936 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14585.pem /etc/ssl/certs/51391683.0"
	I1122 00:28:22.604226  194936 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145852.pem && ln -fs /usr/share/ca-certificates/145852.pem /etc/ssl/certs/145852.pem"
	I1122 00:28:22.612853  194936 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145852.pem
	I1122 00:28:22.616304  194936 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 23:52 /usr/share/ca-certificates/145852.pem
	I1122 00:28:22.616346  194936 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145852.pem
	I1122 00:28:22.656851  194936 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145852.pem /etc/ssl/certs/3ec20f2e.0"
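The test/ln pairs above follow OpenSSL's c_rehash convention: each CA in /etc/ssl/certs gets a symlink named after its subject hash plus a ".0" suffix, which is where names like b5213941.0 and 3ec20f2e.0 come from. The same link can be made by hand:

  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # e.g. b5213941
  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"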
	I1122 00:28:22.665513  194936 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1122 00:28:22.669103  194936 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1122 00:28:22.669163  194936 kubeadm.go:401] StartCluster: {Name:cert-expiration-624739 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-624739 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:28:22.669258  194936 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1122 00:28:22.669304  194936 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1122 00:28:22.698207  194936 cri.go:89] found id: ""
	I1122 00:28:22.698256  194936 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1122 00:28:22.705638  194936 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1122 00:28:22.713150  194936 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1122 00:28:22.713189  194936 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1122 00:28:22.720573  194936 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1122 00:28:22.720581  194936 kubeadm.go:158] found existing configuration files:
	
	I1122 00:28:22.720620  194936 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1122 00:28:22.727530  194936 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1122 00:28:22.727584  194936 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1122 00:28:22.734196  194936 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1122 00:28:22.741485  194936 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1122 00:28:22.741522  194936 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1122 00:28:22.748038  194936 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1122 00:28:22.754923  194936 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1122 00:28:22.754960  194936 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1122 00:28:22.761616  194936 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1122 00:28:22.768521  194936 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1122 00:28:22.768563  194936 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1122 00:28:22.775163  194936 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1122 00:28:22.812852  194936 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1122 00:28:22.812934  194936 kubeadm.go:319] [preflight] Running pre-flight checks
	I1122 00:28:22.834952  194936 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1122 00:28:22.835037  194936 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1122 00:28:22.835102  194936 kubeadm.go:319] OS: Linux
	I1122 00:28:22.835161  194936 kubeadm.go:319] CGROUPS_CPU: enabled
	I1122 00:28:22.835231  194936 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1122 00:28:22.835297  194936 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1122 00:28:22.835373  194936 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1122 00:28:22.835418  194936 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1122 00:28:22.835460  194936 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1122 00:28:22.835503  194936 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1122 00:28:22.835538  194936 kubeadm.go:319] CGROUPS_IO: enabled
	I1122 00:28:22.892839  194936 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1122 00:28:22.892975  194936 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1122 00:28:22.893129  194936 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
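As the preflight hint says, the control-plane images can be pre-pulled against the same config file that kubeadm init was given:

  sudo kubeadm config images pull --config /var/tmp/minikube/kubeadm.yaml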
	I1122 00:28:22.900285  194936 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1122 00:28:20.493898  193202 out.go:204]   - Generating certificates and keys ...
	I1122 00:28:20.494010  193202 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1122 00:28:20.494139  193202 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1122 00:28:20.678304  193202 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1122 00:28:20.969681  193202 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1122 00:28:21.066893  193202 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1122 00:28:21.183993  193202 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1122 00:28:21.790721  193202 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1122 00:28:21.790931  193202 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost running-upgrade-670577] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1122 00:28:21.958958  193202 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1122 00:28:21.959178  193202 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost running-upgrade-670577] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1122 00:28:22.205352  193202 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1122 00:28:22.488434  193202 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1122 00:28:22.617598  193202 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1122 00:28:22.617713  193202 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1122 00:28:22.907519  193202 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1122 00:28:23.025159  193202 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1122 00:28:23.216350  193202 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1122 00:28:23.406949  193202 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1122 00:28:23.407496  193202 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1122 00:28:23.410752  193202 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1122 00:28:20.007848  193714 node_ready.go:57] node "pause-044220" has "Ready":"False" status (will retry)
	W1122 00:28:22.507729  193714 node_ready.go:57] node "pause-044220" has "Ready":"False" status (will retry)
	I1122 00:28:23.508711  193714 node_ready.go:49] node "pause-044220" is "Ready"
	I1122 00:28:23.508743  193714 node_ready.go:38] duration metric: took 8.004384383s for node "pause-044220" to be "Ready" ...
	I1122 00:28:23.508761  193714 api_server.go:52] waiting for apiserver process to appear ...
	I1122 00:28:23.508808  193714 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:28:23.524441  193714 api_server.go:72] duration metric: took 8.167587959s to wait for apiserver process to appear ...
	I1122 00:28:23.524472  193714 api_server.go:88] waiting for apiserver healthz status ...
	I1122 00:28:23.524494  193714 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1122 00:28:23.532635  193714 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1122 00:28:23.533975  193714 api_server.go:141] control plane version: v1.34.1
	I1122 00:28:23.534003  193714 api_server.go:131] duration metric: took 9.523798ms to wait for apiserver health ...
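The healthz probe above is easy to reproduce by hand; on a default kubeadm/minikube cluster /healthz is typically served to unauthenticated clients, so -k (skip verifying the minikube CA) is enough:

  curl -k https://192.168.76.2:8443/healthz   # prints "ok", matching the 200 above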
	I1122 00:28:23.534014  193714 system_pods.go:43] waiting for kube-system pods to appear ...
	I1122 00:28:23.539644  193714 system_pods.go:59] 7 kube-system pods found
	I1122 00:28:23.539692  193714 system_pods.go:61] "coredns-66bc5c9577-c46n9" [4bf35b5e-3d40-4906-bab5-bb9d0c469a5a] Running
	I1122 00:28:23.539704  193714 system_pods.go:61] "etcd-pause-044220" [3bab319e-1c57-4d25-a674-2c8937af44d1] Running
	I1122 00:28:23.539741  193714 system_pods.go:61] "kindnet-6vbjb" [f6763d92-62c4-408f-b9da-9cfc56ce9326] Running
	I1122 00:28:23.539752  193714 system_pods.go:61] "kube-apiserver-pause-044220" [9930ef77-4401-43fb-912d-b571f3336177] Running
	I1122 00:28:23.539757  193714 system_pods.go:61] "kube-controller-manager-pause-044220" [3a41811c-4cab-4abb-b1ba-e8b21ecb6050] Running
	I1122 00:28:23.539762  193714 system_pods.go:61] "kube-proxy-lpz2b" [280f135b-a7a5-4abd-b233-b03ad2e60a2f] Running
	I1122 00:28:23.539767  193714 system_pods.go:61] "kube-scheduler-pause-044220" [041fc1ef-3cd7-41a0-b7d3-c7215a087516] Running
	I1122 00:28:23.539776  193714 system_pods.go:74] duration metric: took 5.754011ms to wait for pod list to return data ...
	I1122 00:28:23.539785  193714 default_sa.go:34] waiting for default service account to be created ...
	I1122 00:28:23.542778  193714 default_sa.go:45] found service account: "default"
	I1122 00:28:23.542799  193714 default_sa.go:55] duration metric: took 3.006689ms for default service account to be created ...
	I1122 00:28:23.542810  193714 system_pods.go:116] waiting for k8s-apps to be running ...
	I1122 00:28:23.547157  193714 system_pods.go:86] 7 kube-system pods found
	I1122 00:28:23.547183  193714 system_pods.go:89] "coredns-66bc5c9577-c46n9" [4bf35b5e-3d40-4906-bab5-bb9d0c469a5a] Running
	I1122 00:28:23.547190  193714 system_pods.go:89] "etcd-pause-044220" [3bab319e-1c57-4d25-a674-2c8937af44d1] Running
	I1122 00:28:23.547195  193714 system_pods.go:89] "kindnet-6vbjb" [f6763d92-62c4-408f-b9da-9cfc56ce9326] Running
	I1122 00:28:23.547202  193714 system_pods.go:89] "kube-apiserver-pause-044220" [9930ef77-4401-43fb-912d-b571f3336177] Running
	I1122 00:28:23.547208  193714 system_pods.go:89] "kube-controller-manager-pause-044220" [3a41811c-4cab-4abb-b1ba-e8b21ecb6050] Running
	I1122 00:28:23.547213  193714 system_pods.go:89] "kube-proxy-lpz2b" [280f135b-a7a5-4abd-b233-b03ad2e60a2f] Running
	I1122 00:28:23.547218  193714 system_pods.go:89] "kube-scheduler-pause-044220" [041fc1ef-3cd7-41a0-b7d3-c7215a087516] Running
	I1122 00:28:23.547227  193714 system_pods.go:126] duration metric: took 4.410206ms to wait for k8s-apps to be running ...
	I1122 00:28:23.547236  193714 system_svc.go:44] waiting for kubelet service to be running ....
	I1122 00:28:23.547283  193714 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:28:23.565366  193714 system_svc.go:56] duration metric: took 18.119197ms WaitForService to wait for kubelet
	I1122 00:28:23.565454  193714 kubeadm.go:587] duration metric: took 8.208605629s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:28:23.565493  193714 node_conditions.go:102] verifying NodePressure condition ...
	I1122 00:28:23.568906  193714 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1122 00:28:23.569007  193714 node_conditions.go:123] node cpu capacity is 8
	I1122 00:28:23.569044  193714 node_conditions.go:105] duration metric: took 3.508228ms to run NodePressure ...
	I1122 00:28:23.569077  193714 start.go:242] waiting for startup goroutines ...
	I1122 00:28:23.569087  193714 start.go:247] waiting for cluster config update ...
	I1122 00:28:23.569102  193714 start.go:256] writing updated cluster config ...
	I1122 00:28:23.569414  193714 ssh_runner.go:195] Run: rm -f paused
	I1122 00:28:23.573674  193714 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:28:23.574284  193714 kapi.go:59] client config for pause-044220: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21934-9122/.minikube/profiles/pause-044220/client.crt", KeyFile:"/home/jenkins/minikube-integration/21934-9122/.minikube/profiles/pause-044220/client.key", CAFile:"/home/jenkins/minikube-integration/21934-9122/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2814ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1122 00:28:23.577259  193714 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-c46n9" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:28:23.583580  193714 pod_ready.go:94] pod "coredns-66bc5c9577-c46n9" is "Ready"
	I1122 00:28:23.583604  193714 pod_ready.go:86] duration metric: took 6.322847ms for pod "coredns-66bc5c9577-c46n9" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:28:23.585886  193714 pod_ready.go:83] waiting for pod "etcd-pause-044220" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:28:23.591597  193714 pod_ready.go:94] pod "etcd-pause-044220" is "Ready"
	I1122 00:28:23.591622  193714 pod_ready.go:86] duration metric: took 5.714092ms for pod "etcd-pause-044220" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:28:23.593980  193714 pod_ready.go:83] waiting for pod "kube-apiserver-pause-044220" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:28:23.598902  193714 pod_ready.go:94] pod "kube-apiserver-pause-044220" is "Ready"
	I1122 00:28:23.598919  193714 pod_ready.go:86] duration metric: took 4.915662ms for pod "kube-apiserver-pause-044220" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:28:23.601147  193714 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-044220" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:28:23.977636  193714 pod_ready.go:94] pod "kube-controller-manager-pause-044220" is "Ready"
	I1122 00:28:23.977666  193714 pod_ready.go:86] duration metric: took 376.499583ms for pod "kube-controller-manager-pause-044220" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:28:24.178769  193714 pod_ready.go:83] waiting for pod "kube-proxy-lpz2b" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:28:24.577681  193714 pod_ready.go:94] pod "kube-proxy-lpz2b" is "Ready"
	I1122 00:28:24.577709  193714 pod_ready.go:86] duration metric: took 398.913529ms for pod "kube-proxy-lpz2b" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:28:24.777687  193714 pod_ready.go:83] waiting for pod "kube-scheduler-pause-044220" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:28:25.178008  193714 pod_ready.go:94] pod "kube-scheduler-pause-044220" is "Ready"
	I1122 00:28:25.178038  193714 pod_ready.go:86] duration metric: took 400.328668ms for pod "kube-scheduler-pause-044220" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:28:25.178075  193714 pod_ready.go:40] duration metric: took 1.604346914s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:28:25.243596  193714 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1122 00:28:25.246167  193714 out.go:179] * Done! kubectl is now configured to use "pause-044220" cluster and "default" namespace by default
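
The pod_ready.go lines above poll each kube-system control-plane pod by label until its PodReady condition is True (or the pod is gone). A minimal sketch of that loop with client-go, assuming a standard kubeconfig; the helper names, abridged label set, and 500ms interval are illustrative, not minikube's actual code:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether the pod's PodReady condition is True.
func isReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitPodsReady polls kube-system pods matching label until all report Ready.
func waitPodsReady(ctx context.Context, cs *kubernetes.Clientset, label string) error {
	for {
		pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: label})
		if err != nil {
			return err
		}
		allReady := len(pods.Items) > 0
		for i := range pods.Items {
			if !isReady(&pods.Items[i]) {
				allReady = false
			}
		}
		if allReady {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(500 * time.Millisecond): // illustrative interval
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	// Abridged version of the label list in the duration-metric line above.
	for _, l := range []string{"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver"} {
		if err := waitPodsReady(ctx, cs, l); err != nil {
			panic(err)
		}
		fmt.Println("ready:", l)
	}
}
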
	I1122 00:28:23.412063  193202 out.go:204]   - Booting up control plane ...
	I1122 00:28:23.412234  193202 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1122 00:28:23.412335  193202 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1122 00:28:23.413209  193202 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1122 00:28:23.424254  193202 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1122 00:28:23.425108  193202 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1122 00:28:23.425210  193202 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1122 00:28:23.507941  193202 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1122 00:28:22.486979  185312 out.go:204]   - Booting up control plane ...
	I1122 00:28:22.487121  185312 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1122 00:28:22.487218  185312 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1122 00:28:22.488240  185312 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1122 00:28:22.497461  185312 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1122 00:28:22.498397  185312 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1122 00:28:22.498451  185312 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1122 00:28:22.573433  185312 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1122 00:28:22.901515  194936 out.go:252]   - Generating certificates and keys ...
	I1122 00:28:22.901605  194936 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1122 00:28:22.901709  194936 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1122 00:28:23.059824  194936 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1122 00:28:23.214515  194936 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1122 00:28:23.680992  194936 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1122 00:28:23.890747  194936 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1122 00:28:24.193141  194936 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1122 00:28:24.193335  194936 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [cert-expiration-624739 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1122 00:28:24.452194  194936 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1122 00:28:24.452462  194936 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [cert-expiration-624739 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1122 00:28:25.038420  194936 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1122 00:28:25.230401  194936 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1122 00:28:25.279825  194936 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1122 00:28:25.279911  194936 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1122 00:28:25.536418  194936 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1122 00:28:25.685008  194936 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1122 00:28:26.090469  194936 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1122 00:28:26.756010  194936 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1122 00:28:27.283336  194936 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1122 00:28:27.284330  194936 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1122 00:28:27.287624  194936 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1122 00:28:27.289689  194936 out.go:252]   - Booting up control plane ...
	I1122 00:28:27.289769  194936 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1122 00:28:27.289837  194936 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1122 00:28:27.289896  194936 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1122 00:28:27.302690  194936 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1122 00:28:27.302833  194936 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1122 00:28:27.309368  194936 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1122 00:28:27.309635  194936 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1122 00:28:27.309693  194936 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1122 00:28:27.404942  194936 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1122 00:28:27.405143  194936 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
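
The [kubelet-check] step logged above is a plain HTTP poll of the kubelet's local healthz endpoint. A minimal sketch using only the standard library; the one-second retry interval is illustrative:

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Deadline mirrors the "This can take up to 4m0s" note in the log.
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := http.Get("http://127.0.0.1:10248/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("kubelet is healthy")
				return
			}
		}
		time.Sleep(time.Second)
	}
	fmt.Println("timed out waiting for kubelet healthz")
}
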
	
	
	==> CRI-O <==
	Nov 22 00:28:13 pause-044220 crio[2157]: time="2025-11-22T00:28:13.68015529Z" level=info msg="RDT not available in the host system"
	Nov 22 00:28:13 pause-044220 crio[2157]: time="2025-11-22T00:28:13.680171395Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 22 00:28:13 pause-044220 crio[2157]: time="2025-11-22T00:28:13.68131596Z" level=info msg="Conmon does support the --sync option"
	Nov 22 00:28:13 pause-044220 crio[2157]: time="2025-11-22T00:28:13.681343816Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 22 00:28:13 pause-044220 crio[2157]: time="2025-11-22T00:28:13.68136232Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 22 00:28:13 pause-044220 crio[2157]: time="2025-11-22T00:28:13.682208087Z" level=info msg="Conmon does support the --sync option"
	Nov 22 00:28:13 pause-044220 crio[2157]: time="2025-11-22T00:28:13.68222725Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 22 00:28:13 pause-044220 crio[2157]: time="2025-11-22T00:28:13.68778503Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 22 00:28:13 pause-044220 crio[2157]: time="2025-11-22T00:28:13.68781304Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 22 00:28:13 pause-044220 crio[2157]: time="2025-11-22T00:28:13.688704586Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Nov 22 00:28:13 pause-044220 crio[2157]: time="2025-11-22T00:28:13.689228958Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Nov 22 00:28:13 pause-044220 crio[2157]: time="2025-11-22T00:28:13.689300831Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Nov 22 00:28:13 pause-044220 crio[2157]: time="2025-11-22T00:28:13.798776299Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-c46n9 Namespace:kube-system ID:e4ed7db197e377e4e2e094506d0ab4c6c05a4c1118a4b22aa7919bc00d18d078 UID:4bf35b5e-3d40-4906-bab5-bb9d0c469a5a NetNS:/var/run/netns/222719f3-8c23-4f61-b149-1dbf8729b62c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00039c540}] Aliases:map[]}"
	Nov 22 00:28:13 pause-044220 crio[2157]: time="2025-11-22T00:28:13.799045315Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-c46n9 for CNI network kindnet (type=ptp)"
	Nov 22 00:28:13 pause-044220 crio[2157]: time="2025-11-22T00:28:13.799711217Z" level=info msg="Registered SIGHUP reload watcher"
	Nov 22 00:28:13 pause-044220 crio[2157]: time="2025-11-22T00:28:13.799736929Z" level=info msg="Starting seccomp notifier watcher"
	Nov 22 00:28:13 pause-044220 crio[2157]: time="2025-11-22T00:28:13.79979607Z" level=info msg="Create NRI interface"
	Nov 22 00:28:13 pause-044220 crio[2157]: time="2025-11-22T00:28:13.79990969Z" level=info msg="built-in NRI default validator is disabled"
	Nov 22 00:28:13 pause-044220 crio[2157]: time="2025-11-22T00:28:13.799925401Z" level=info msg="runtime interface created"
	Nov 22 00:28:13 pause-044220 crio[2157]: time="2025-11-22T00:28:13.79994001Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Nov 22 00:28:13 pause-044220 crio[2157]: time="2025-11-22T00:28:13.79994832Z" level=info msg="runtime interface starting up..."
	Nov 22 00:28:13 pause-044220 crio[2157]: time="2025-11-22T00:28:13.799955975Z" level=info msg="starting plugins..."
	Nov 22 00:28:13 pause-044220 crio[2157]: time="2025-11-22T00:28:13.79997092Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Nov 22 00:28:13 pause-044220 crio[2157]: time="2025-11-22T00:28:13.800384545Z" level=info msg="No systemd watchdog enabled"
	Nov 22 00:28:13 pause-044220 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
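
The "Current CRI-O configuration" entry above is one TOML document logged on a single line. A sketch of decoding a few of its fields, assuming the github.com/BurntSushi/toml package and that the dump has been saved (with the \n escapes expanded) to a local file; the struct mirrors only the keys printed in the log:

package main

import (
	"fmt"
	"os"

	"github.com/BurntSushi/toml"
)

type crioConfig struct {
	Crio struct {
		StorageDriver string `toml:"storage_driver"`
		Runtime       struct {
			DefaultRuntime string `toml:"default_runtime"`
			CgroupManager  string `toml:"cgroup_manager"`
		} `toml:"runtime"`
		Image struct {
			PauseImage string `toml:"pause_image"`
		} `toml:"image"`
	} `toml:"crio"`
}

func main() {
	// Hypothetical file holding the TOML extracted from the log line above.
	data, err := os.ReadFile("crio-config.toml")
	if err != nil {
		panic(err)
	}
	var cfg crioConfig
	if err := toml.Unmarshal(data, &cfg); err != nil {
		panic(err)
	}
	fmt.Println("storage driver: ", cfg.Crio.StorageDriver)
	fmt.Println("default runtime:", cfg.Crio.Runtime.DefaultRuntime)
	fmt.Println("cgroup manager: ", cfg.Crio.Runtime.CgroupManager)
	fmt.Println("pause image:    ", cfg.Crio.Image.PauseImage)
}
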
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	e462ee7150306       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   28 seconds ago      Running             coredns                   0                   e4ed7db197e37       coredns-66bc5c9577-c46n9               kube-system
	f4bcdbc163f62       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   40 seconds ago      Running             kindnet-cni               0                   a5001a3132b92       kindnet-6vbjb                          kube-system
	0e4ad5609f787       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   40 seconds ago      Running             kube-proxy                0                   f99e2251ff6a4       kube-proxy-lpz2b                       kube-system
	3e14061fd4fcc       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   50 seconds ago      Running             kube-scheduler            0                   d5209a6d6688f       kube-scheduler-pause-044220            kube-system
	ecabd39636370       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   51 seconds ago      Running             kube-controller-manager   0                   883b544295ec8       kube-controller-manager-pause-044220   kube-system
	79e36c09e9fdb       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   51 seconds ago      Running             kube-apiserver            0                   66879fb501230       kube-apiserver-pause-044220            kube-system
	b4370898f997f       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   51 seconds ago      Running             etcd                      0                   4e4fe2f00250b       etcd-pause-044220                      kube-system
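
This table is built from the CRI's container listing. A sketch of fetching the same data by calling ListContainers on CRI-O's socket directly, assuming google.golang.org/grpc and k8s.io/cri-api; it must run on the node (as root) where the socket lives:

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// The CRI socket is local and unauthenticated, hence insecure creds.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Empty filter: list every container, like the table above.
	resp, err := runtimeapi.NewRuntimeServiceClient(conn).
		ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%.13s  %-25s %s\n", c.Id, c.Metadata.Name, c.State)
	}
}
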
	
	
	==> coredns [e462ee7150306df05ce83db1f3e0df58183280d43d04f1cb6d52feef9a3b7b3d] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49709 - 8295 "HINFO IN 7893672033695571944.9016515455700446387. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.093652726s
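
The lone HINFO query with a random name is most likely CoreDNS's loop plugin probing itself for forwarding loops at startup; NXDOMAIN is the expected answer. A sketch of issuing the same kind of probe, assuming the github.com/miekg/dns package and execution from inside the cluster:

package main

import (
	"fmt"

	"github.com/miekg/dns"
)

func main() {
	m := new(dns.Msg)
	// Random-looking name, as in the CoreDNS log line above.
	m.SetQuestion("7893672033695571944.9016515455700446387.", dns.TypeHINFO)
	c := new(dns.Client)
	// 10.96.0.10 is the kube-dns ClusterIP from the apiserver log below;
	// only reachable from within the cluster network.
	resp, _, err := c.Exchange(m, "10.96.0.10:53")
	if err != nil {
		panic(err)
	}
	fmt.Println("rcode:", dns.RcodeToString[resp.Rcode]) // NXDOMAIN expected
}
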
	
	
	==> describe nodes <==
	Name:               pause-044220
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-044220
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785
	                    minikube.k8s.io/name=pause-044220
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_22T00_27_43_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 22 Nov 2025 00:27:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-044220
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 22 Nov 2025 00:28:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 22 Nov 2025 00:28:23 +0000   Sat, 22 Nov 2025 00:27:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 22 Nov 2025 00:28:23 +0000   Sat, 22 Nov 2025 00:27:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 22 Nov 2025 00:28:23 +0000   Sat, 22 Nov 2025 00:27:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 22 Nov 2025 00:28:23 +0000   Sat, 22 Nov 2025 00:28:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-044220
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 5665009e93b91d39dc05718b691e3875
	  System UUID:                b5805957-782e-4cab-938a-26ad2cd52f0e
	  Boot ID:                    8e6acb0e-95c3-406d-a4d5-d86e16610b0f
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-c46n9                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     41s
	  kube-system                 etcd-pause-044220                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         46s
	  kube-system                 kindnet-6vbjb                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      41s
	  kube-system                 kube-apiserver-pause-044220             250m (3%)     0 (0%)      0 (0%)           0 (0%)         46s
	  kube-system                 kube-controller-manager-pause-044220    200m (2%)     0 (0%)      0 (0%)           0 (0%)         47s
	  kube-system                 kube-proxy-lpz2b                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-scheduler-pause-044220             100m (1%)     0 (0%)      0 (0%)           0 (0%)         46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age               From             Message
	  ----    ------                   ----              ----             -------
	  Normal  Starting                 40s               kube-proxy       
	  Normal  Starting                 46s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  46s               kubelet          Node pause-044220 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    46s               kubelet          Node pause-044220 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     46s               kubelet          Node pause-044220 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           42s               node-controller  Node pause-044220 event: Registered Node pause-044220 in Controller
	  Normal  NodeNotReady             16s               kubelet          Node pause-044220 status is now: NodeNotReady
	  Normal  NodeReady                5s (x2 over 29s)  kubelet          Node pause-044220 status is now: NodeReady
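
The Conditions table above can be reproduced programmatically. A minimal client-go sketch, assuming a standard kubeconfig and the node name from this report:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "pause-044220", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Same columns as the describe output: Type, Status, Reason.
	for _, c := range node.Status.Conditions {
		fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
	}
}
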
	
	
	==> dmesg <==
	[  +0.082960] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023673] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.276450] kauditd_printk_skb: 47 callbacks suppressed
	[Nov21 23:48] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.062274] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.023896] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.023887] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.023879] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.023873] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +2.047773] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +4.031577] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[Nov21 23:49] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[ +16.383304] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[ +32.252695] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	
	
	==> etcd [b4370898f997f4499a567b711a03da7b4fec4f862abdda3f9d2f8bbdb7555955] <==
	{"level":"warn","ts":"2025-11-22T00:27:39.298095Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:27:39.307800Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:27:39.316476Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:27:39.376117Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36036","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-22T00:27:48.355299Z","caller":"traceutil/trace.go:172","msg":"trace[1742722197] linearizableReadLoop","detail":"{readStateIndex:361; appliedIndex:361; }","duration":"113.951812ms","start":"2025-11-22T00:27:48.241323Z","end":"2025-11-22T00:27:48.355275Z","steps":["trace[1742722197] 'read index received'  (duration: 113.939621ms)","trace[1742722197] 'applied index is now lower than readState.Index'  (duration: 11.201µs)"],"step_count":2}
	{"level":"info","ts":"2025-11-22T00:27:48.355416Z","caller":"traceutil/trace.go:172","msg":"trace[2122983945] transaction","detail":"{read_only:false; response_revision:350; number_of_response:1; }","duration":"128.817561ms","start":"2025-11-22T00:27:48.226583Z","end":"2025-11-22T00:27:48.355401Z","steps":["trace[2122983945] 'process raft request'  (duration: 128.711343ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-22T00:27:48.355580Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"114.211923ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/coredns\" limit:1 ","response":"range_response_count:1 size:693"}
	{"level":"info","ts":"2025-11-22T00:27:48.355632Z","caller":"traceutil/trace.go:172","msg":"trace[655372888] range","detail":"{range_begin:/registry/configmaps/kube-system/coredns; range_end:; response_count:1; response_revision:350; }","duration":"114.309768ms","start":"2025-11-22T00:27:48.241312Z","end":"2025-11-22T00:27:48.355622Z","steps":["trace[655372888] 'agreement among raft nodes before linearized reading'  (duration: 114.050454ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-22T00:27:48.516078Z","caller":"traceutil/trace.go:172","msg":"trace[1940888544] transaction","detail":"{read_only:false; response_revision:352; number_of_response:1; }","duration":"134.112516ms","start":"2025-11-22T00:27:48.381927Z","end":"2025-11-22T00:27:48.516040Z","steps":["trace[1940888544] 'process raft request'  (duration: 133.877542ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-22T00:27:48.516126Z","caller":"traceutil/trace.go:172","msg":"trace[971567281] transaction","detail":"{read_only:false; response_revision:351; number_of_response:1; }","duration":"135.71062ms","start":"2025-11-22T00:27:48.380396Z","end":"2025-11-22T00:27:48.516106Z","steps":["trace[971567281] 'process raft request'  (duration: 81.009186ms)","trace[971567281] 'compare'  (duration: 54.296647ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-22T00:27:48.880689Z","caller":"traceutil/trace.go:172","msg":"trace[1494015599] transaction","detail":"{read_only:false; response_revision:359; number_of_response:1; }","duration":"121.658683ms","start":"2025-11-22T00:27:48.759014Z","end":"2025-11-22T00:27:48.880673Z","steps":["trace[1494015599] 'process raft request'  (duration: 121.617537ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-22T00:27:48.880758Z","caller":"traceutil/trace.go:172","msg":"trace[1543492353] transaction","detail":"{read_only:false; response_revision:358; number_of_response:1; }","duration":"176.54715ms","start":"2025-11-22T00:27:48.704188Z","end":"2025-11-22T00:27:48.880735Z","steps":["trace[1543492353] 'process raft request'  (duration: 135.399665ms)","trace[1543492353] 'compare'  (duration: 40.924801ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-22T00:27:49.011605Z","caller":"traceutil/trace.go:172","msg":"trace[309240107] transaction","detail":"{read_only:false; response_revision:362; number_of_response:1; }","duration":"118.932143ms","start":"2025-11-22T00:27:48.892659Z","end":"2025-11-22T00:27:49.011592Z","steps":["trace[309240107] 'process raft request'  (duration: 118.842218ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-22T00:27:49.011603Z","caller":"traceutil/trace.go:172","msg":"trace[1081551917] transaction","detail":"{read_only:false; response_revision:361; number_of_response:1; }","duration":"122.622019ms","start":"2025-11-22T00:27:48.888914Z","end":"2025-11-22T00:27:49.011536Z","steps":["trace[1081551917] 'process raft request'  (duration: 116.11108ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-22T00:27:59.942992Z","caller":"traceutil/trace.go:172","msg":"trace[945982709] transaction","detail":"{read_only:false; response_revision:385; number_of_response:1; }","duration":"164.339708ms","start":"2025-11-22T00:27:59.778627Z","end":"2025-11-22T00:27:59.942967Z","steps":["trace[945982709] 'process raft request'  (duration: 87.726366ms)","trace[945982709] 'compare'  (duration: 76.451701ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-22T00:28:12.989855Z","caller":"traceutil/trace.go:172","msg":"trace[1571593150] transaction","detail":"{read_only:false; response_revision:405; number_of_response:1; }","duration":"141.359678ms","start":"2025-11-22T00:28:12.848480Z","end":"2025-11-22T00:28:12.989840Z","steps":["trace[1571593150] 'process raft request'  (duration: 141.281736ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-22T00:28:13.180803Z","caller":"traceutil/trace.go:172","msg":"trace[1566919365] linearizableReadLoop","detail":"{readStateIndex:424; appliedIndex:424; }","duration":"190.304294ms","start":"2025-11-22T00:28:12.990479Z","end":"2025-11-22T00:28:13.180783Z","steps":["trace[1566919365] 'read index received'  (duration: 190.294381ms)","trace[1566919365] 'applied index is now lower than readState.Index'  (duration: 8.748µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-22T00:28:13.180909Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"190.410163ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-22T00:28:13.180937Z","caller":"traceutil/trace.go:172","msg":"trace[1735423235] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:405; }","duration":"190.454874ms","start":"2025-11-22T00:28:12.990475Z","end":"2025-11-22T00:28:13.180930Z","steps":["trace[1735423235] 'agreement among raft nodes before linearized reading'  (duration: 190.379323ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-22T00:28:13.180934Z","caller":"traceutil/trace.go:172","msg":"trace[81478439] transaction","detail":"{read_only:false; response_revision:406; number_of_response:1; }","duration":"330.650227ms","start":"2025-11-22T00:28:12.850267Z","end":"2025-11-22T00:28:13.180917Z","steps":["trace[81478439] 'process raft request'  (duration: 330.541385ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-22T00:28:13.181480Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-22T00:28:12.850253Z","time spent":"330.728674ms","remote":"127.0.0.1:34400","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5547,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/minions/pause-044220\" mod_revision:385 > success:<request_put:<key:\"/registry/minions/pause-044220\" value_size:5509 >> failure:<request_range:<key:\"/registry/minions/pause-044220\" > >"}
	{"level":"info","ts":"2025-11-22T00:28:13.429453Z","caller":"traceutil/trace.go:172","msg":"trace[895717490] linearizableReadLoop","detail":"{readStateIndex:426; appliedIndex:426; }","duration":"200.772499ms","start":"2025-11-22T00:28:13.228661Z","end":"2025-11-22T00:28:13.429434Z","steps":["trace[895717490] 'read index received'  (duration: 200.765591ms)","trace[895717490] 'applied index is now lower than readState.Index'  (duration: 5.926µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-22T00:28:13.429551Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"200.895028ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-22T00:28:13.429575Z","caller":"traceutil/trace.go:172","msg":"trace[745545141] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:407; }","duration":"200.933162ms","start":"2025-11-22T00:28:13.228634Z","end":"2025-11-22T00:28:13.429567Z","steps":["trace[745545141] 'agreement among raft nodes before linearized reading'  (duration: 200.877572ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-22T00:28:13.429590Z","caller":"traceutil/trace.go:172","msg":"trace[270600498] transaction","detail":"{read_only:false; response_revision:408; number_of_response:1; }","duration":"242.272588ms","start":"2025-11-22T00:28:13.187309Z","end":"2025-11-22T00:28:13.429582Z","steps":["trace[270600498] 'process raft request'  (duration: 242.157757ms)"],"step_count":1}
	
	
	==> kernel <==
	 00:28:29 up  1:10,  0 user,  load average: 5.15, 2.17, 1.27
	Linux pause-044220 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f4bcdbc163f622f7ad75cecdf8d434f7eaf5d5abf5ce74eb74a0f51eaacea0e2] <==
	I1122 00:27:49.078662       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1122 00:27:49.079195       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1122 00:27:49.079399       1 main.go:148] setting mtu 1500 for CNI 
	I1122 00:27:49.079460       1 main.go:178] kindnetd IP family: "ipv4"
	I1122 00:27:49.079511       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-22T00:27:49Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1122 00:27:49.375570       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1122 00:27:49.375624       1 controller.go:381] "Waiting for informer caches to sync"
	I1122 00:27:49.375638       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1122 00:27:49.378588       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1122 00:27:49.675706       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1122 00:27:49.675741       1 metrics.go:72] Registering metrics
	I1122 00:27:49.675802       1 controller.go:711] "Syncing nftables rules"
	I1122 00:27:59.285950       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1122 00:27:59.286013       1 main.go:301] handling current node
	I1122 00:28:09.286330       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1122 00:28:09.286368       1 main.go:301] handling current node
	I1122 00:28:19.290139       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1122 00:28:19.290189       1 main.go:301] handling current node
	
	
	==> kube-apiserver [79e36c09e9fdbb8b8ebf65a808135710cb7702a0ba5485d102074e2baaf898e5] <==
	I1122 00:27:40.291301       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:27:40.291382       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1122 00:27:40.302973       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:27:40.303047       1 controller.go:667] quota admission added evaluator for: namespaces
	I1122 00:27:40.304493       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	E1122 00:27:40.314940       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	E1122 00:27:40.329104       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1122 00:27:40.519563       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1122 00:27:41.076185       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1122 00:27:41.080129       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1122 00:27:41.080158       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1122 00:27:41.580425       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1122 00:27:41.619989       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1122 00:27:41.693960       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1122 00:27:41.705223       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1122 00:27:41.707476       1 controller.go:667] quota admission added evaluator for: endpoints
	I1122 00:27:41.712234       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1122 00:27:41.740879       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1122 00:27:42.552257       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1122 00:27:42.560209       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1122 00:27:42.565774       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1122 00:27:46.741234       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1122 00:27:47.840447       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1122 00:27:47.891857       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:27:47.896510       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [ecabd3963637033e9f4270fbbeef757d2c6b777cf5058c6dcad3e6914ca9e45c] <==
	I1122 00:27:46.736826       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1122 00:27:46.736833       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1122 00:27:46.736840       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-044220"
	I1122 00:27:46.736888       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1122 00:27:46.738253       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1122 00:27:46.738336       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1122 00:27:46.738456       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1122 00:27:46.738464       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1122 00:27:46.738261       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1122 00:27:46.738493       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1122 00:27:46.738498       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1122 00:27:46.738638       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1122 00:27:46.738701       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1122 00:27:46.739663       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1122 00:27:46.739944       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1122 00:27:46.740106       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1122 00:27:46.743965       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1122 00:27:46.745421       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1122 00:27:46.746549       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1122 00:27:46.751686       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1122 00:27:46.751812       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1122 00:27:46.760168       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1122 00:28:01.739299       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I1122 00:28:16.740182       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1122 00:28:26.741561       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [0e4ad5609f7872bdcc433e845fd6360589ac30894e126e3f56e8d3cd82296232] <==
	I1122 00:27:48.732245       1 server_linux.go:53] "Using iptables proxy"
	I1122 00:27:48.787720       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1122 00:27:48.889160       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1122 00:27:48.889328       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1122 00:27:48.889494       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1122 00:27:48.909024       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1122 00:27:48.909115       1 server_linux.go:132] "Using iptables Proxier"
	I1122 00:27:48.914088       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1122 00:27:48.914436       1 server.go:527] "Version info" version="v1.34.1"
	I1122 00:27:48.914476       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:27:48.916433       1 config.go:200] "Starting service config controller"
	I1122 00:27:48.916454       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1122 00:27:48.916703       1 config.go:309] "Starting node config controller"
	I1122 00:27:48.916722       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1122 00:27:48.915873       1 config.go:403] "Starting serviceCIDR config controller"
	I1122 00:27:48.916904       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1122 00:27:48.917025       1 config.go:106] "Starting endpoint slice config controller"
	I1122 00:27:48.917035       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1122 00:27:49.017309       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1122 00:27:49.018249       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1122 00:27:49.018386       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1122 00:27:49.018400       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [3e14061fd4fccbedc7dfab2388bb9c60556e3392e101adffd2e959a71915c5ba] <==
	E1122 00:27:40.228235       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1122 00:27:40.228307       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1122 00:27:40.228346       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1122 00:27:40.228404       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1122 00:27:40.228446       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1122 00:27:40.228554       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1122 00:27:40.228749       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1122 00:27:40.228795       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1122 00:27:40.228837       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1122 00:27:40.228876       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1122 00:27:40.228926       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1122 00:27:40.228966       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1122 00:27:40.229033       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1122 00:27:41.121250       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1122 00:27:41.162318       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1122 00:27:41.177345       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1122 00:27:41.204540       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1122 00:27:41.234428       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1122 00:27:41.274182       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1122 00:27:41.280150       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1122 00:27:41.301096       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1122 00:27:41.334628       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1122 00:27:41.362982       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1122 00:27:41.408150       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1122 00:27:44.014925       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
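
The "Failed to watch ... forbidden" errors above are the scheduler starting before its bootstrap RBAC bindings exist; they stop once the bindings land and the caches sync (the final line). A sketch that asks the API server whether system:kube-scheduler may list nodes, using a SubjectAccessReview via client-go:

package main

import (
	"context"
	"fmt"

	authv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// Check the exact access the reflector errors above complain about.
	sar := &authv1.SubjectAccessReview{
		Spec: authv1.SubjectAccessReviewSpec{
			User: "system:kube-scheduler",
			ResourceAttributes: &authv1.ResourceAttributes{
				Verb:     "list",
				Resource: "nodes",
			},
		},
	}
	res, err := cs.AuthorizationV1().SubjectAccessReviews().Create(
		context.Background(), sar, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("allowed:", res.Status.Allowed, res.Status.Reason)
}
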
	
	
	==> kubelet <==
	Nov 22 00:28:12 pause-044220 kubelet[1302]: E1122 00:28:12.394670    1302 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 22 00:28:12 pause-044220 kubelet[1302]: E1122 00:28:12.394690    1302 kubelet_pods.go:1266] "Error listing containers" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 22 00:28:12 pause-044220 kubelet[1302]: E1122 00:28:12.394707    1302 kubelet.go:2613] "Failed cleaning pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 22 00:28:12 pause-044220 kubelet[1302]: E1122 00:28:12.410808    1302 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter=""
	Nov 22 00:28:12 pause-044220 kubelet[1302]: E1122 00:28:12.410856    1302 container_log_manager.go:154] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 22 00:28:12 pause-044220 kubelet[1302]: E1122 00:28:12.412953    1302 log.go:32] "Status from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 22 00:28:12 pause-044220 kubelet[1302]: E1122 00:28:12.412987    1302 kubelet.go:2996] "Container runtime sanity check failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 22 00:28:12 pause-044220 kubelet[1302]: E1122 00:28:12.435571    1302 log.go:32] "ImageFsInfo from image service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 22 00:28:12 pause-044220 kubelet[1302]: E1122 00:28:12.435609    1302 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: missing image stats: <nil>"
	Nov 22 00:28:12 pause-044220 kubelet[1302]: E1122 00:28:12.464936    1302 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Nov 22 00:28:12 pause-044220 kubelet[1302]: E1122 00:28:12.464968    1302 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 22 00:28:12 pause-044220 kubelet[1302]: E1122 00:28:12.464985    1302 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 22 00:28:12 pause-044220 kubelet[1302]: W1122 00:28:12.480382    1302 logging.go:55] [core] [Channel #4 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Nov 22 00:28:12 pause-044220 kubelet[1302]: W1122 00:28:12.623481    1302 logging.go:55] [core] [Channel #4 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Nov 22 00:28:12 pause-044220 kubelet[1302]: E1122 00:28:12.846529    1302 log.go:32] "Version from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 22 00:28:12 pause-044220 kubelet[1302]: I1122 00:28:12.846618    1302 setters.go:543] "Node became not ready" node="pause-044220" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:28:12Z","lastTransitionTime":"2025-11-22T00:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: plugin status uninitialized"}
	Nov 22 00:28:12 pause-044220 kubelet[1302]: W1122 00:28:12.910434    1302 logging.go:55] [core] [Channel #4 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Nov 22 00:28:13 pause-044220 kubelet[1302]: W1122 00:28:13.263817    1302 logging.go:55] [core] [Channel #4 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Nov 22 00:28:13 pause-044220 kubelet[1302]: E1122 00:28:13.465940    1302 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Nov 22 00:28:13 pause-044220 kubelet[1302]: E1122 00:28:13.465992    1302 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 22 00:28:13 pause-044220 kubelet[1302]: E1122 00:28:13.466011    1302 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 22 00:28:25 pause-044220 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 22 00:28:25 pause-044220 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 22 00:28:25 pause-044220 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 22 00:28:25 pause-044220 systemd[1]: kubelet.service: Consumed 1.408s CPU time.
	

-- /stdout --
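The kubelet errors above are consistent with a paused profile in which CRI-O has been stopped while kubelet was still running: every CRI call against /var/run/crio/crio.sock fails with "no such file or directory" until the runtime comes back. A minimal manual check against the same profile (illustrative only, not part of the test harness) would be:

	minikube -p pause-044220 ssh -- sudo systemctl is-active crio
	minikube -p pause-044220 ssh -- ls -l /var/run/crio/crio.sock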
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-044220 -n pause-044220
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-044220 -n pause-044220: exit status 2 (426.764371ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
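For context, minikube's status command signals degraded components through its exit code, so a profile whose host container is "Running" but whose kubelet and apiserver are paused still exits non-zero; that is why the harness treats exit status 2 as possibly benign here. A standalone probe over the same template fields (illustrative only) is:

	out/minikube-linux-amd64 status -p pause-044220 --format='{{.Host}}/{{.Kubelet}}/{{.APIServer}}'; echo "exit=$?"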
helpers_test.go:269: (dbg) Run:  kubectl --context pause-044220 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
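The proxy snapshot above rules host-level proxies in or out as a cause of connectivity failures; an equivalent one-off shell check (illustrative only, not part of the harness) is:

	env | grep -iE '^(HTTP_PROXY|HTTPS_PROXY|NO_PROXY)=' || echo "no proxy variables set"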
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-044220
helpers_test.go:243: (dbg) docker inspect pause-044220:

-- stdout --
	[
	    {
	        "Id": "02e81c3924546be35753c49b9bd112bf70ebe28ddad6aec659918fdca1330440",
	        "Created": "2025-11-22T00:27:23.717910492Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 186483,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-22T00:27:23.763884218Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e5906b22e872a17998ae88aee6d850484e7a99144e0db6afcf2c44a53e6042d4",
	        "ResolvConfPath": "/var/lib/docker/containers/02e81c3924546be35753c49b9bd112bf70ebe28ddad6aec659918fdca1330440/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/02e81c3924546be35753c49b9bd112bf70ebe28ddad6aec659918fdca1330440/hostname",
	        "HostsPath": "/var/lib/docker/containers/02e81c3924546be35753c49b9bd112bf70ebe28ddad6aec659918fdca1330440/hosts",
	        "LogPath": "/var/lib/docker/containers/02e81c3924546be35753c49b9bd112bf70ebe28ddad6aec659918fdca1330440/02e81c3924546be35753c49b9bd112bf70ebe28ddad6aec659918fdca1330440-json.log",
	        "Name": "/pause-044220",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-044220:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-044220",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "02e81c3924546be35753c49b9bd112bf70ebe28ddad6aec659918fdca1330440",
	                "LowerDir": "/var/lib/docker/overlay2/ba74c1e54086f0f29631408ea6b72478e36f982e9c0fc0c25731651dcc2442a8-init/diff:/var/lib/docker/overlay2/ba9391341a6f38c98c330a240b44a37e901d779a7c95e15c141c59c46e09c348/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ba74c1e54086f0f29631408ea6b72478e36f982e9c0fc0c25731651dcc2442a8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ba74c1e54086f0f29631408ea6b72478e36f982e9c0fc0c25731651dcc2442a8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ba74c1e54086f0f29631408ea6b72478e36f982e9c0fc0c25731651dcc2442a8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-044220",
	                "Source": "/var/lib/docker/volumes/pause-044220/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-044220",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-044220",
	                "name.minikube.sigs.k8s.io": "pause-044220",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "767d608a3a6d89c165f823ffe414ded3d9c14f0a4cc7603ea37b610fd262784c",
	            "SandboxKey": "/var/run/docker/netns/767d608a3a6d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32973"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32974"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32977"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32975"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32976"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-044220": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cf1125a89f944b928e1da3985e14afd6320515efbd15f4b428e9b91fbf80e100",
	                    "EndpointID": "9974f353b9f4169828348be36f142228506f262548888847d789bffed4a920ab",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "2e:51:fa:f3:2f:0f",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-044220",
	                        "02e81c392454"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
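The NetworkSettings.Ports block in the inspect output above is what minikube's provisioner reads later in this log to find the SSH endpoint; the equivalent one-off lookup, using the same Go template that appears in the cli_runner calls below, is:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' pause-044220

which should print 32973 for this container.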
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-044220 -n pause-044220
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-044220 -n pause-044220: exit status 2 (482.022627ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-044220 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-044220 logs -n 25: (1.247964463s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                       ARGS                                                        │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p test-preload-417192                                                                                            │ test-preload-417192         │ jenkins │ v1.37.0 │ 22 Nov 25 00:25 UTC │ 22 Nov 25 00:25 UTC │
	│ start   │ -p scheduled-stop-366786 --memory=3072 --driver=docker  --container-runtime=crio                                  │ scheduled-stop-366786       │ jenkins │ v1.37.0 │ 22 Nov 25 00:25 UTC │ 22 Nov 25 00:25 UTC │
	│ stop    │ -p scheduled-stop-366786 --schedule 5m -v=5 --alsologtostderr                                                     │ scheduled-stop-366786       │ jenkins │ v1.37.0 │ 22 Nov 25 00:25 UTC │                     │
	│ stop    │ -p scheduled-stop-366786 --schedule 5m -v=5 --alsologtostderr                                                     │ scheduled-stop-366786       │ jenkins │ v1.37.0 │ 22 Nov 25 00:25 UTC │                     │
	│ stop    │ -p scheduled-stop-366786 --schedule 5m -v=5 --alsologtostderr                                                     │ scheduled-stop-366786       │ jenkins │ v1.37.0 │ 22 Nov 25 00:25 UTC │                     │
	│ stop    │ -p scheduled-stop-366786 --schedule 15s -v=5 --alsologtostderr                                                    │ scheduled-stop-366786       │ jenkins │ v1.37.0 │ 22 Nov 25 00:25 UTC │                     │
	│ stop    │ -p scheduled-stop-366786 --schedule 15s -v=5 --alsologtostderr                                                    │ scheduled-stop-366786       │ jenkins │ v1.37.0 │ 22 Nov 25 00:25 UTC │                     │
	│ stop    │ -p scheduled-stop-366786 --schedule 15s -v=5 --alsologtostderr                                                    │ scheduled-stop-366786       │ jenkins │ v1.37.0 │ 22 Nov 25 00:25 UTC │                     │
	│ stop    │ -p scheduled-stop-366786 --cancel-scheduled                                                                       │ scheduled-stop-366786       │ jenkins │ v1.37.0 │ 22 Nov 25 00:25 UTC │ 22 Nov 25 00:25 UTC │
	│ stop    │ -p scheduled-stop-366786 --schedule 15s -v=5 --alsologtostderr                                                    │ scheduled-stop-366786       │ jenkins │ v1.37.0 │ 22 Nov 25 00:26 UTC │                     │
	│ stop    │ -p scheduled-stop-366786 --schedule 15s -v=5 --alsologtostderr                                                    │ scheduled-stop-366786       │ jenkins │ v1.37.0 │ 22 Nov 25 00:26 UTC │                     │
	│ stop    │ -p scheduled-stop-366786 --schedule 15s -v=5 --alsologtostderr                                                    │ scheduled-stop-366786       │ jenkins │ v1.37.0 │ 22 Nov 25 00:26 UTC │ 22 Nov 25 00:26 UTC │
	│ delete  │ -p scheduled-stop-366786                                                                                          │ scheduled-stop-366786       │ jenkins │ v1.37.0 │ 22 Nov 25 00:26 UTC │ 22 Nov 25 00:26 UTC │
	│ start   │ -p insufficient-storage-310459 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio  │ insufficient-storage-310459 │ jenkins │ v1.37.0 │ 22 Nov 25 00:26 UTC │                     │
	│ delete  │ -p insufficient-storage-310459                                                                                    │ insufficient-storage-310459 │ jenkins │ v1.37.0 │ 22 Nov 25 00:27 UTC │ 22 Nov 25 00:27 UTC │
	│ start   │ -p pause-044220 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio         │ pause-044220                │ jenkins │ v1.37.0 │ 22 Nov 25 00:27 UTC │ 22 Nov 25 00:28 UTC │
	│ start   │ -p force-systemd-env-087837 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio        │ force-systemd-env-087837    │ jenkins │ v1.37.0 │ 22 Nov 25 00:27 UTC │ 22 Nov 25 00:27 UTC │
	│ start   │ -p offline-crio-033967 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio │ offline-crio-033967         │ jenkins │ v1.37.0 │ 22 Nov 25 00:27 UTC │ 22 Nov 25 00:28 UTC │
	│ start   │ -p stopped-upgrade-220412 --memory=3072 --vm-driver=docker  --container-runtime=crio                              │ stopped-upgrade-220412      │ jenkins │ v1.32.0 │ 22 Nov 25 00:27 UTC │                     │
	│ delete  │ -p force-systemd-env-087837                                                                                       │ force-systemd-env-087837    │ jenkins │ v1.37.0 │ 22 Nov 25 00:27 UTC │ 22 Nov 25 00:27 UTC │
	│ start   │ -p running-upgrade-670577 --memory=3072 --vm-driver=docker  --container-runtime=crio                              │ running-upgrade-670577      │ jenkins │ v1.32.0 │ 22 Nov 25 00:27 UTC │                     │
	│ start   │ -p pause-044220 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                  │ pause-044220                │ jenkins │ v1.37.0 │ 22 Nov 25 00:28 UTC │ 22 Nov 25 00:28 UTC │
	│ delete  │ -p offline-crio-033967                                                                                            │ offline-crio-033967         │ jenkins │ v1.37.0 │ 22 Nov 25 00:28 UTC │ 22 Nov 25 00:28 UTC │
	│ start   │ -p cert-expiration-624739 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio            │ cert-expiration-624739      │ jenkins │ v1.37.0 │ 22 Nov 25 00:28 UTC │                     │
	│ pause   │ -p pause-044220 --alsologtostderr -v=5                                                                            │ pause-044220                │ jenkins │ v1.37.0 │ 22 Nov 25 00:28 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/22 00:28:07
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1122 00:28:07.871356  194936 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:28:07.871463  194936 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:28:07.871466  194936 out.go:374] Setting ErrFile to fd 2...
	I1122 00:28:07.871469  194936 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:28:07.871732  194936 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9122/.minikube/bin
	I1122 00:28:07.872190  194936 out.go:368] Setting JSON to false
	I1122 00:28:07.873266  194936 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4237,"bootTime":1763767051,"procs":271,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1122 00:28:07.873324  194936 start.go:143] virtualization: kvm guest
	I1122 00:28:07.878483  194936 out.go:179] * [cert-expiration-624739] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1122 00:28:07.879684  194936 notify.go:221] Checking for updates...
	I1122 00:28:07.879709  194936 out.go:179]   - MINIKUBE_LOCATION=21934
	I1122 00:28:07.880856  194936 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 00:28:07.882024  194936 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-9122/kubeconfig
	I1122 00:28:07.883236  194936 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-9122/.minikube
	I1122 00:28:07.884249  194936 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1122 00:28:07.886651  194936 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 00:28:07.888966  194936 config.go:182] Loaded profile config "pause-044220": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:28:07.889120  194936 config.go:182] Loaded profile config "running-upgrade-670577": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1122 00:28:07.889247  194936 config.go:182] Loaded profile config "stopped-upgrade-220412": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1122 00:28:07.889377  194936 driver.go:422] Setting default libvirt URI to qemu:///system
	I1122 00:28:07.929222  194936 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1122 00:28:07.929354  194936 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:28:08.002702  194936 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:60 OomKillDisable:false NGoroutines:88 SystemTime:2025-11-22 00:28:07.989739604 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1122 00:28:08.002842  194936 docker.go:319] overlay module found
	I1122 00:28:08.008483  194936 out.go:179] * Using the docker driver based on user configuration
	I1122 00:28:08.009557  194936 start.go:309] selected driver: docker
	I1122 00:28:08.009566  194936 start.go:930] validating driver "docker" against <nil>
	I1122 00:28:08.009580  194936 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 00:28:08.010399  194936 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:28:08.081088  194936 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:60 OomKillDisable:false NGoroutines:88 SystemTime:2025-11-22 00:28:08.0696759 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1122 00:28:08.081244  194936 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1122 00:28:08.081432  194936 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1122 00:28:08.082888  194936 out.go:179] * Using Docker driver with root privileges
	I1122 00:28:08.084061  194936 cni.go:84] Creating CNI manager for ""
	I1122 00:28:08.084137  194936 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1122 00:28:08.084147  194936 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1122 00:28:08.084244  194936 start.go:353] cluster config:
	{Name:cert-expiration-624739 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-624739 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:28:08.085540  194936 out.go:179] * Starting "cert-expiration-624739" primary control-plane node in "cert-expiration-624739" cluster
	I1122 00:28:08.086602  194936 cache.go:134] Beginning downloading kic base image for docker with crio
	I1122 00:28:08.087663  194936 out.go:179] * Pulling base image v0.0.48-1763588073-21934 ...
	I1122 00:28:08.088687  194936 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:28:08.088767  194936 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon
	I1122 00:28:08.088822  194936 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21934-9122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1122 00:28:08.088834  194936 cache.go:65] Caching tarball of preloaded images
	I1122 00:28:08.088941  194936 preload.go:238] Found /home/jenkins/minikube-integration/21934-9122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1122 00:28:08.088949  194936 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1122 00:28:08.089095  194936 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/cert-expiration-624739/config.json ...
	I1122 00:28:08.089118  194936 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/cert-expiration-624739/config.json: {Name:mk6ad289b63cf9798b64fb02b5d9656644a7d337 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:28:08.121701  194936 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon, skipping pull
	I1122 00:28:08.121715  194936 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e exists in daemon, skipping load
	I1122 00:28:08.121733  194936 cache.go:243] Successfully downloaded all kic artifacts
	I1122 00:28:08.121768  194936 start.go:360] acquireMachinesLock for cert-expiration-624739: {Name:mk3e7a6e0a4875a636ffa6046666b41f1179e198 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:28:08.121867  194936 start.go:364] duration metric: took 82.015µs to acquireMachinesLock for "cert-expiration-624739"
	I1122 00:28:08.121890  194936 start.go:93] Provisioning new machine with config: &{Name:cert-expiration-624739 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-624739 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1122 00:28:08.121970  194936 start.go:125] createHost starting for "" (driver="docker")
	I1122 00:28:03.695928  193714 out.go:252] * Updating the running docker "pause-044220" container ...
	I1122 00:28:03.695972  193714 machine.go:94] provisionDockerMachine start ...
	I1122 00:28:03.696029  193714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-044220
	I1122 00:28:03.715429  193714 main.go:143] libmachine: Using SSH client type: native
	I1122 00:28:03.715789  193714 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32973 <nil> <nil>}
	I1122 00:28:03.715807  193714 main.go:143] libmachine: About to run SSH command:
	hostname
	I1122 00:28:03.838121  193714 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-044220
	
	I1122 00:28:03.838153  193714 ubuntu.go:182] provisioning hostname "pause-044220"
	I1122 00:28:03.838217  193714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-044220
	I1122 00:28:03.855437  193714 main.go:143] libmachine: Using SSH client type: native
	I1122 00:28:03.855689  193714 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32973 <nil> <nil>}
	I1122 00:28:03.855703  193714 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-044220 && echo "pause-044220" | sudo tee /etc/hostname
	I1122 00:28:03.983173  193714 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-044220
	
	I1122 00:28:03.983234  193714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-044220
	I1122 00:28:04.002791  193714 main.go:143] libmachine: Using SSH client type: native
	I1122 00:28:04.003117  193714 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32973 <nil> <nil>}
	I1122 00:28:04.003146  193714 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-044220' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-044220/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-044220' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1122 00:28:04.122959  193714 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1122 00:28:04.122986  193714 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21934-9122/.minikube CaCertPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21934-9122/.minikube}
	I1122 00:28:04.123035  193714 ubuntu.go:190] setting up certificates
	I1122 00:28:04.123074  193714 provision.go:84] configureAuth start
	I1122 00:28:04.123125  193714 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-044220
	I1122 00:28:04.140331  193714 provision.go:143] copyHostCerts
	I1122 00:28:04.140379  193714 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9122/.minikube/key.pem, removing ...
	I1122 00:28:04.140409  193714 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9122/.minikube/key.pem
	I1122 00:28:04.168203  193714 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21934-9122/.minikube/key.pem (1675 bytes)
	I1122 00:28:04.168365  193714 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9122/.minikube/ca.pem, removing ...
	I1122 00:28:04.168378  193714 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9122/.minikube/ca.pem
	I1122 00:28:04.168416  193714 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21934-9122/.minikube/ca.pem (1078 bytes)
	I1122 00:28:04.168473  193714 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9122/.minikube/cert.pem, removing ...
	I1122 00:28:04.168481  193714 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9122/.minikube/cert.pem
	I1122 00:28:04.168509  193714 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21934-9122/.minikube/cert.pem (1123 bytes)
	I1122 00:28:04.168561  193714 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21934-9122/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca-key.pem org=jenkins.pause-044220 san=[127.0.0.1 192.168.76.2 localhost minikube pause-044220]
	I1122 00:28:04.231530  193714 provision.go:177] copyRemoteCerts
	I1122 00:28:04.231595  193714 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1122 00:28:04.231648  193714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-044220
	I1122 00:28:04.249703  193714 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32973 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/pause-044220/id_rsa Username:docker}
	I1122 00:28:04.340957  193714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1122 00:28:04.364148  193714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1122 00:28:04.384358  193714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1122 00:28:04.403456  193714 provision.go:87] duration metric: took 280.369866ms to configureAuth
	I1122 00:28:04.403478  193714 ubuntu.go:206] setting minikube options for container-runtime
	I1122 00:28:04.403705  193714 config.go:182] Loaded profile config "pause-044220": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:28:04.403819  193714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-044220
	I1122 00:28:04.421377  193714 main.go:143] libmachine: Using SSH client type: native
	I1122 00:28:04.421670  193714 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32973 <nil> <nil>}
	I1122 00:28:04.421696  193714 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1122 00:28:07.077834  193714 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1122 00:28:07.077867  193714 machine.go:97] duration metric: took 3.381884648s to provisionDockerMachine
	I1122 00:28:07.077882  193714 start.go:293] postStartSetup for "pause-044220" (driver="docker")
	I1122 00:28:07.077895  193714 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1122 00:28:07.077961  193714 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1122 00:28:07.078017  193714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-044220
	I1122 00:28:07.099184  193714 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32973 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/pause-044220/id_rsa Username:docker}
	I1122 00:28:07.210843  193714 ssh_runner.go:195] Run: cat /etc/os-release
	I1122 00:28:07.215254  193714 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1122 00:28:07.215289  193714 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1122 00:28:07.215303  193714 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-9122/.minikube/addons for local assets ...
	I1122 00:28:07.215374  193714 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-9122/.minikube/files for local assets ...
	I1122 00:28:07.215488  193714 filesync.go:149] local asset: /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem -> 145852.pem in /etc/ssl/certs
	I1122 00:28:07.215632  193714 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1122 00:28:07.229533  193714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem --> /etc/ssl/certs/145852.pem (1708 bytes)
	I1122 00:28:07.293301  193714 start.go:296] duration metric: took 215.40153ms for postStartSetup
	I1122 00:28:07.293396  193714 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:28:07.293438  193714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-044220
	I1122 00:28:07.318147  193714 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32973 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/pause-044220/id_rsa Username:docker}
	I1122 00:28:07.408462  193714 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 00:28:07.413628  193714 fix.go:56] duration metric: took 3.741934847s for fixHost
	I1122 00:28:07.413655  193714 start.go:83] releasing machines lock for "pause-044220", held for 3.741989246s
	I1122 00:28:07.413729  193714 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-044220
	I1122 00:28:07.432704  193714 ssh_runner.go:195] Run: cat /version.json
	I1122 00:28:07.432763  193714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-044220
	I1122 00:28:07.432775  193714 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1122 00:28:07.432845  193714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-044220
	I1122 00:28:07.450678  193714 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32973 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/pause-044220/id_rsa Username:docker}
	I1122 00:28:07.451604  193714 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32973 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/pause-044220/id_rsa Username:docker}
	I1122 00:28:07.604711  193714 ssh_runner.go:195] Run: systemctl --version
	I1122 00:28:07.613595  193714 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1122 00:28:07.655707  193714 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1122 00:28:07.660705  193714 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1122 00:28:07.660774  193714 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1122 00:28:07.669865  193714 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1122 00:28:07.669891  193714 start.go:496] detecting cgroup driver to use...
	I1122 00:28:07.669920  193714 detect.go:190] detected "systemd" cgroup driver on host os
	I1122 00:28:07.669963  193714 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1122 00:28:07.684801  193714 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1122 00:28:07.701218  193714 docker.go:218] disabling cri-docker service (if available) ...
	I1122 00:28:07.701369  193714 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1122 00:28:07.721117  193714 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1122 00:28:07.739696  193714 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1122 00:28:07.873936  193714 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1122 00:28:08.030151  193714 docker.go:234] disabling docker service ...
	I1122 00:28:08.030225  193714 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1122 00:28:08.051961  193714 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1122 00:28:08.070245  193714 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1122 00:28:08.220630  193714 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1122 00:28:08.376086  193714 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1122 00:28:08.388960  193714 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1122 00:28:08.410567  193714 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1122 00:28:08.410649  193714 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:28:07.296467  193202 cli_runner.go:217] Completed: docker run --rm --name running-upgrade-670577-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=running-upgrade-670577 --entrypoint /usr/bin/test -v running-upgrade-670577:/var gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -d /var/lib: (9.507077812s)
	I1122 00:28:07.296496  193202 oci.go:107] Successfully prepared a docker volume running-upgrade-670577
	I1122 00:28:07.296532  193202 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1122 00:28:07.296563  193202 kic.go:194] Starting extracting preloaded images to volume ...
	I1122 00:28:07.296624  193202 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21934-9122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v running-upgrade-670577:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1122 00:28:07.586366  185312 cli_runner.go:217] Completed: docker run --rm --name stopped-upgrade-220412-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=stopped-upgrade-220412 --entrypoint /usr/bin/test -v stopped-upgrade-220412:/var gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -d /var/lib: (17.681611163s)
	I1122 00:28:07.586393  185312 oci.go:107] Successfully prepared a docker volume stopped-upgrade-220412
	I1122 00:28:07.586414  185312 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1122 00:28:07.586447  185312 kic.go:194] Starting extracting preloaded images to volume ...
	I1122 00:28:07.586531  185312 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21934-9122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v stopped-upgrade-220412:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1122 00:28:08.127776  194936 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1122 00:28:08.128075  194936 start.go:159] libmachine.API.Create for "cert-expiration-624739" (driver="docker")
	I1122 00:28:08.128110  194936 client.go:173] LocalClient.Create starting
	I1122 00:28:08.128189  194936 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem
	I1122 00:28:08.128224  194936 main.go:143] libmachine: Decoding PEM data...
	I1122 00:28:08.128249  194936 main.go:143] libmachine: Parsing certificate...
	I1122 00:28:08.128311  194936 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem
	I1122 00:28:08.128336  194936 main.go:143] libmachine: Decoding PEM data...
	I1122 00:28:08.128354  194936 main.go:143] libmachine: Parsing certificate...
	I1122 00:28:08.128826  194936 cli_runner.go:164] Run: docker network inspect cert-expiration-624739 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1122 00:28:08.148910  194936 cli_runner.go:211] docker network inspect cert-expiration-624739 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1122 00:28:08.148974  194936 network_create.go:284] running [docker network inspect cert-expiration-624739] to gather additional debugging logs...
	I1122 00:28:08.148987  194936 cli_runner.go:164] Run: docker network inspect cert-expiration-624739
	W1122 00:28:08.168253  194936 cli_runner.go:211] docker network inspect cert-expiration-624739 returned with exit code 1
	I1122 00:28:08.168291  194936 network_create.go:287] error running [docker network inspect cert-expiration-624739]: docker network inspect cert-expiration-624739: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network cert-expiration-624739 not found
	I1122 00:28:08.168304  194936 network_create.go:289] output of [docker network inspect cert-expiration-624739]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network cert-expiration-624739 not found
	
	** /stderr **
	I1122 00:28:08.168454  194936 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:28:08.192537  194936 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-680fbf0b84de IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8e:e6:2f:e5:eb:9f} reservation:<nil>}
	I1122 00:28:08.193399  194936 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-fb90361b5ee3 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:6a:cd:d3:0b:da:39} reservation:<nil>}
	I1122 00:28:08.194191  194936 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8fa9d9e8b5ac IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:be:02:4b:3f:d0:70} reservation:<nil>}
	I1122 00:28:08.195106  194936 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-cf1125a89f94 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:42:f2:07:fa:fd:c9} reservation:<nil>}
	I1122 00:28:08.196066  194936 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-77229b827ce8 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:26:9b:8d:69:33:c2} reservation:<nil>}
	I1122 00:28:08.197274  194936 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ed5c80}
	I1122 00:28:08.197302  194936 network_create.go:124] attempt to create docker network cert-expiration-624739 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1122 00:28:08.197376  194936 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cert-expiration-624739 cert-expiration-624739
	I1122 00:28:08.255249  194936 network_create.go:108] docker network cert-expiration-624739 192.168.94.0/24 created
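The subnet probing above walks private /24 candidates in steps of 9 (192.168.49.0, .58.0, .67.0, ...) and takes the first one no existing bridge occupies. A rough shell equivalent, assuming the same candidate sequence (the network name and loop are illustrative, not minikube code):

	# Find the first 192.168.x.0/24 candidate not already used by a docker network.
	used=$(docker network ls -q | xargs -r docker network inspect \
	         -f '{{range .IPAM.Config}}{{.Subnet}}{{end}}')
	for third in 49 58 67 76 85 94 103; do
	  subnet="192.168.${third}.0/24"
	  case "$used" in *"$subnet"*) continue ;; esac
	  docker network create --driver=bridge --subnet="$subnet" \
	    --gateway="192.168.${third}.1" my-network   # name is a placeholder
	  break
	done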
	I1122 00:28:08.255272  194936 kic.go:121] calculated static IP "192.168.94.2" for the "cert-expiration-624739" container
	I1122 00:28:08.255350  194936 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1122 00:28:08.280175  194936 cli_runner.go:164] Run: docker volume create cert-expiration-624739 --label name.minikube.sigs.k8s.io=cert-expiration-624739 --label created_by.minikube.sigs.k8s.io=true
	I1122 00:28:08.299377  194936 oci.go:103] Successfully created a docker volume cert-expiration-624739
	I1122 00:28:08.299467  194936 cli_runner.go:164] Run: docker run --rm --name cert-expiration-624739-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-624739 --entrypoint /usr/bin/test -v cert-expiration-624739:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -d /var/lib
	I1122 00:28:08.474991  193714 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1122 00:28:08.475108  193714 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:28:08.533608  193714 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:28:08.597134  193714 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:28:08.660414  193714 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1122 00:28:08.669242  193714 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:28:08.719810  193714 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:28:08.729513  193714 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
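After this series of sed edits the drop-in should carry the systemd cgroup settings and the unprivileged-port sysctl. A quick check (the expected values are inferred from the sed expressions above, not echoed in the log):

	sudo grep -E 'cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# expected, per the edits above:
	#   cgroup_manager = "systemd"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",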
	I1122 00:28:08.774231  193714 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1122 00:28:08.782896  193714 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1122 00:28:08.790886  193714 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:28:08.908454  193714 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1122 00:28:13.804939  193714 ssh_runner.go:235] Completed: sudo systemctl restart crio: (4.896446263s)
	I1122 00:28:13.804970  193714 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1122 00:28:13.805020  193714 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1122 00:28:13.809713  193714 start.go:564] Will wait 60s for crictl version
	I1122 00:28:13.809770  193714 ssh_runner.go:195] Run: which crictl
	I1122 00:28:13.813742  193714 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1122 00:28:13.848581  193714 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1122 00:28:13.848656  193714 ssh_runner.go:195] Run: crio --version
	I1122 00:28:13.886425  193714 ssh_runner.go:195] Run: crio --version
	I1122 00:28:13.923660  193714 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1122 00:28:13.717178  193202 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21934-9122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v running-upgrade-670577:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir: (6.420506165s)
	I1122 00:28:13.717209  193202 kic.go:203] duration metric: took 6.420644 seconds to extract preloaded images to volume
	W1122 00:28:13.717320  193202 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1122 00:28:13.717370  193202 oci.go:243] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1122 00:28:13.717419  193202 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1122 00:28:13.797265  193202 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname running-upgrade-670577 --name running-upgrade-670577 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=running-upgrade-670577 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=running-upgrade-670577 --network running-upgrade-670577 --ip 192.168.103.2 --volume running-upgrade-670577:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0
	I1122 00:28:14.176913  193202 cli_runner.go:164] Run: docker container inspect running-upgrade-670577 --format={{.State.Running}}
	I1122 00:28:14.201441  193202 cli_runner.go:164] Run: docker container inspect running-upgrade-670577 --format={{.State.Status}}
	I1122 00:28:14.222423  193202 cli_runner.go:164] Run: docker exec running-upgrade-670577 stat /var/lib/dpkg/alternatives/iptables
	I1122 00:28:14.271371  193202 oci.go:144] the created container "running-upgrade-670577" has a running status.
	I1122 00:28:14.271413  193202 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21934-9122/.minikube/machines/running-upgrade-670577/id_rsa...
	I1122 00:28:14.700600  193202 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21934-9122/.minikube/machines/running-upgrade-670577/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1122 00:28:14.841574  193202 cli_runner.go:164] Run: docker container inspect running-upgrade-670577 --format={{.State.Status}}
	I1122 00:28:14.866600  193202 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1122 00:28:14.866616  193202 kic_runner.go:114] Args: [docker exec --privileged running-upgrade-670577 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1122 00:28:14.943751  193202 cli_runner.go:164] Run: docker container inspect running-upgrade-670577 --format={{.State.Status}}
	I1122 00:28:14.972997  193202 machine.go:88] provisioning docker machine ...
	I1122 00:28:14.973233  193202 ubuntu.go:169] provisioning hostname "running-upgrade-670577"
	I1122 00:28:14.973330  193202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-670577
	I1122 00:28:14.999555  193202 main.go:141] libmachine: Using SSH client type: native
	I1122 00:28:15.000405  193202 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 127.0.0.1 32988 <nil> <nil>}
	I1122 00:28:15.000423  193202 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-670577 && echo "running-upgrade-670577" | sudo tee /etc/hostname
	I1122 00:28:15.153602  193202 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-670577
	
	I1122 00:28:15.153665  193202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-670577
	I1122 00:28:15.176308  193202 main.go:141] libmachine: Using SSH client type: native
	I1122 00:28:15.176805  193202 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 127.0.0.1 32988 <nil> <nil>}
	I1122 00:28:15.176828  193202 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-670577' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-670577/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-670577' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1122 00:28:13.925089  193714 cli_runner.go:164] Run: docker network inspect pause-044220 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:28:13.945208  193714 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1122 00:28:13.950992  193714 kubeadm.go:884] updating cluster {Name:pause-044220 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-044220 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1122 00:28:13.951144  193714 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:28:13.951185  193714 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:28:13.989577  193714 crio.go:514] all images are preloaded for cri-o runtime.
	I1122 00:28:13.989603  193714 crio.go:433] Images already preloaded, skipping extraction
	I1122 00:28:13.989672  193714 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:28:14.025650  193714 crio.go:514] all images are preloaded for cri-o runtime.
	I1122 00:28:14.025677  193714 cache_images.go:86] Images are preloaded, skipping loading
	I1122 00:28:14.025685  193714 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1122 00:28:14.025783  193714 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-044220 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-044220 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1122 00:28:14.025852  193714 ssh_runner.go:195] Run: crio config
	I1122 00:28:14.079865  193714 cni.go:84] Creating CNI manager for ""
	I1122 00:28:14.079909  193714 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1122 00:28:14.079937  193714 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1122 00:28:14.079977  193714 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-044220 NodeName:pause-044220 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1122 00:28:14.080224  193714 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-044220"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1122 00:28:14.080307  193714 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1122 00:28:14.089311  193714 binaries.go:51] Found k8s binaries, skipping transfer
	I1122 00:28:14.089382  193714 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1122 00:28:14.096844  193714 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1122 00:28:14.110234  193714 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1122 00:28:14.129462  193714 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I1122 00:28:14.142629  193714 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1122 00:28:14.146394  193714 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:28:14.282520  193714 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:28:14.302004  193714 certs.go:69] Setting up /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/pause-044220 for IP: 192.168.76.2
	I1122 00:28:14.302034  193714 certs.go:195] generating shared ca certs ...
	I1122 00:28:14.302076  193714 certs.go:227] acquiring lock for ca certs: {Name:mk04e97b22fe69e444baefaaf0d53a0afe979470 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:28:14.302265  193714 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21934-9122/.minikube/ca.key
	I1122 00:28:14.302324  193714 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21934-9122/.minikube/proxy-client-ca.key
	I1122 00:28:14.302341  193714 certs.go:257] generating profile certs ...
	I1122 00:28:14.302457  193714 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/pause-044220/client.key
	I1122 00:28:14.302534  193714 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/pause-044220/apiserver.key.33726e52
	I1122 00:28:14.302585  193714 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/pause-044220/proxy-client.key
	I1122 00:28:14.302814  193714 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/14585.pem (1338 bytes)
	W1122 00:28:14.302859  193714 certs.go:480] ignoring /home/jenkins/minikube-integration/21934-9122/.minikube/certs/14585_empty.pem, impossibly tiny 0 bytes
	I1122 00:28:14.302888  193714 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca-key.pem (1679 bytes)
	I1122 00:28:14.302924  193714 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem (1078 bytes)
	I1122 00:28:14.302977  193714 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem (1123 bytes)
	I1122 00:28:14.303009  193714 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/key.pem (1675 bytes)
	I1122 00:28:14.303182  193714 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem (1708 bytes)
	I1122 00:28:14.304487  193714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1122 00:28:14.332251  193714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1122 00:28:14.359280  193714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1122 00:28:14.382301  193714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1122 00:28:14.404480  193714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/pause-044220/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1122 00:28:14.428761  193714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/pause-044220/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1122 00:28:14.455692  193714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/pause-044220/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1122 00:28:14.481039  193714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/pause-044220/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1122 00:28:14.507561  193714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem --> /usr/share/ca-certificates/145852.pem (1708 bytes)
	I1122 00:28:14.540974  193714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1122 00:28:14.581587  193714 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/certs/14585.pem --> /usr/share/ca-certificates/14585.pem (1338 bytes)
	I1122 00:28:14.644875  193714 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1122 00:28:14.701998  193714 ssh_runner.go:195] Run: openssl version
	I1122 00:28:14.709625  193714 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1122 00:28:14.719935  193714 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:28:14.724360  193714 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 23:46 /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:28:14.724411  193714 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:28:14.782794  193714 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1122 00:28:14.792934  193714 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14585.pem && ln -fs /usr/share/ca-certificates/14585.pem /etc/ssl/certs/14585.pem"
	I1122 00:28:14.807941  193714 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14585.pem
	I1122 00:28:14.812619  193714 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 23:52 /usr/share/ca-certificates/14585.pem
	I1122 00:28:14.812670  193714 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14585.pem
	I1122 00:28:14.848165  193714 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14585.pem /etc/ssl/certs/51391683.0"
	I1122 00:28:14.861634  193714 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145852.pem && ln -fs /usr/share/ca-certificates/145852.pem /etc/ssl/certs/145852.pem"
	I1122 00:28:14.877357  193714 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145852.pem
	I1122 00:28:14.885564  193714 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 23:52 /usr/share/ca-certificates/145852.pem
	I1122 00:28:14.885651  193714 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145852.pem
	I1122 00:28:14.961694  193714 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145852.pem /etc/ssl/certs/3ec20f2e.0"
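The openssl -hash / ln -fs pairs above implement the standard OpenSSL CA-directory layout, where each CA must be reachable under <subject-hash>.0. A generic sketch of the same step, collapsing the two links in the log into one (cert name from this run; b5213941 is the hash the log computes for minikubeCA.pem):

	# Link a CA cert into /etc/ssl/certs under its OpenSSL subject hash.
	cert=/usr/share/ca-certificates/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$cert")   # prints e.g. b5213941
	sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"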
	I1122 00:28:14.974256  193714 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1122 00:28:14.980013  193714 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1122 00:28:15.039772  193714 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1122 00:28:15.084548  193714 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1122 00:28:15.129169  193714 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1122 00:28:15.174191  193714 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1122 00:28:15.227674  193714 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
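The six openssl invocations above all use -checkend 86400, which exits non-zero if the certificate expires within the next 24 hours. The same probe in looped form (cert list copied from the log):

	for c in apiserver-etcd-client apiserver-kubelet-client etcd/server \
	         etcd/healthcheck-client etcd/peer front-proxy-client; do
	  openssl x509 -noout -in "/var/lib/minikube/certs/$c.crt" -checkend 86400 \
	    || echo "$c.crt expires within 24h"
	done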
	I1122 00:28:15.281616  193714 kubeadm.go:401] StartCluster: {Name:pause-044220 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-044220 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:28:15.281759  193714 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1122 00:28:15.281827  193714 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1122 00:28:15.311105  193714 cri.go:89] found id: "e462ee7150306df05ce83db1f3e0df58183280d43d04f1cb6d52feef9a3b7b3d"
	I1122 00:28:15.311127  193714 cri.go:89] found id: "f4bcdbc163f622f7ad75cecdf8d434f7eaf5d5abf5ce74eb74a0f51eaacea0e2"
	I1122 00:28:15.311131  193714 cri.go:89] found id: "0e4ad5609f7872bdcc433e845fd6360589ac30894e126e3f56e8d3cd82296232"
	I1122 00:28:15.311135  193714 cri.go:89] found id: "3e14061fd4fccbedc7dfab2388bb9c60556e3392e101adffd2e959a71915c5ba"
	I1122 00:28:15.311138  193714 cri.go:89] found id: "ecabd3963637033e9f4270fbbeef757d2c6b777cf5058c6dcad3e6914ca9e45c"
	I1122 00:28:15.311140  193714 cri.go:89] found id: "79e36c09e9fdbb8b8ebf65a808135710cb7702a0ba5485d102074e2baaf898e5"
	I1122 00:28:15.311143  193714 cri.go:89] found id: "b4370898f997f4499a567b711a03da7b4fec4f862abdda3f9d2f8bbdb7555955"
	I1122 00:28:15.311146  193714 cri.go:89] found id: ""
	I1122 00:28:15.311183  193714 ssh_runner.go:195] Run: sudo runc list -f json
	W1122 00:28:15.321740  193714 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-22T00:28:15Z" level=error msg="open /run/runc: no such file or directory"
	I1122 00:28:15.321791  193714 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1122 00:28:15.329378  193714 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1122 00:28:15.329394  193714 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1122 00:28:15.329431  193714 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1122 00:28:15.336165  193714 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1122 00:28:15.336650  193714 kubeconfig.go:125] found "pause-044220" server: "https://192.168.76.2:8443"
	I1122 00:28:15.337132  193714 kapi.go:59] client config for pause-044220: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21934-9122/.minikube/profiles/pause-044220/client.crt", KeyFile:"/home/jenkins/minikube-integration/21934-9122/.minikube/profiles/pause-044220/client.key", CAFile:"/home/jenkins/minikube-integration/21934-9122/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2814ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1122 00:28:15.337521  193714 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1122 00:28:15.337533  193714 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1122 00:28:15.337538  193714 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1122 00:28:15.337542  193714 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1122 00:28:15.337549  193714 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1122 00:28:15.338040  193714 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1122 00:28:15.346890  193714 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1122 00:28:15.346922  193714 kubeadm.go:602] duration metric: took 17.52235ms to restartPrimaryControlPlane
	I1122 00:28:15.346932  193714 kubeadm.go:403] duration metric: took 65.330154ms to StartCluster
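The restart decision above hinges on the diff at 00:28:15.338040: when the freshly rendered kubeadm.yaml.new matches the config already on disk, minikube skips re-running kubeadm. The check reduces to:

	# Re-run kubeadm configuration only when the rendered config changed.
	if sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new >/dev/null; then
	  echo "no reconfiguration needed"   # the path this run took
	else
	  echo "config drift detected; cluster would be reconfigured"
	fi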
	I1122 00:28:15.346951  193714 settings.go:142] acquiring lock: {Name:mk281bec5fc7c41e6f3fe8d6a1502b13a2db8fe5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:28:15.347027  193714 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21934-9122/kubeconfig
	I1122 00:28:15.347818  193714 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/kubeconfig: {Name:mk85f0b9d89ff824568ecbebf0d1111042a6bb93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:28:15.356805  193714 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1122 00:28:15.356921  193714 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1122 00:28:15.357090  193714 config.go:182] Loaded profile config "pause-044220": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:28:15.361380  193714 out.go:179] * Enabled addons: 
	I1122 00:28:15.361423  193714 out.go:179] * Verifying Kubernetes components...
	I1122 00:28:13.724111  185312 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21934-9122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v stopped-upgrade-220412:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir: (6.137534951s)
	I1122 00:28:13.724147  185312 kic.go:203] duration metric: took 6.137698 seconds to extract preloaded images to volume
	W1122 00:28:13.724257  185312 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1122 00:28:13.724309  185312 oci.go:243] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1122 00:28:13.724355  185312 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1122 00:28:13.797286  185312 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname stopped-upgrade-220412 --name stopped-upgrade-220412 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=stopped-upgrade-220412 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=stopped-upgrade-220412 --network stopped-upgrade-220412 --ip 192.168.85.2 --volume stopped-upgrade-220412:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0
	I1122 00:28:14.297242  185312 cli_runner.go:164] Run: docker container inspect stopped-upgrade-220412 --format={{.State.Running}}
	I1122 00:28:14.320343  185312 cli_runner.go:164] Run: docker container inspect stopped-upgrade-220412 --format={{.State.Status}}
	I1122 00:28:14.343264  185312 cli_runner.go:164] Run: docker exec stopped-upgrade-220412 stat /var/lib/dpkg/alternatives/iptables
	I1122 00:28:14.404183  185312 oci.go:144] the created container "stopped-upgrade-220412" has a running status.
	I1122 00:28:14.404209  185312 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21934-9122/.minikube/machines/stopped-upgrade-220412/id_rsa...
	I1122 00:28:14.821937  185312 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21934-9122/.minikube/machines/stopped-upgrade-220412/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1122 00:28:14.891727  185312 cli_runner.go:164] Run: docker container inspect stopped-upgrade-220412 --format={{.State.Status}}
	I1122 00:28:14.927265  185312 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1122 00:28:14.927286  185312 kic_runner.go:114] Args: [docker exec --privileged stopped-upgrade-220412 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1122 00:28:14.995573  185312 cli_runner.go:164] Run: docker container inspect stopped-upgrade-220412 --format={{.State.Status}}
	I1122 00:28:15.024497  185312 machine.go:88] provisioning docker machine ...
	I1122 00:28:15.024534  185312 ubuntu.go:169] provisioning hostname "stopped-upgrade-220412"
	I1122 00:28:15.024615  185312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-220412
	I1122 00:28:15.049103  185312 main.go:141] libmachine: Using SSH client type: native
	I1122 00:28:15.050569  185312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 127.0.0.1 32993 <nil> <nil>}
	I1122 00:28:15.050588  185312 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-220412 && echo "stopped-upgrade-220412" | sudo tee /etc/hostname
	I1122 00:28:15.192552  185312 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-220412
	
	I1122 00:28:15.192636  185312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-220412
	I1122 00:28:15.215904  185312 main.go:141] libmachine: Using SSH client type: native
	I1122 00:28:15.216480  185312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 127.0.0.1 32993 <nil> <nil>}
	I1122 00:28:15.216504  185312 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-220412' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-220412/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-220412' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1122 00:28:15.339919  185312 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1122 00:28:15.339942  185312 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/21934-9122/.minikube CaCertPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21934-9122/.minikube}
	I1122 00:28:15.339987  185312 ubuntu.go:177] setting up certificates
	I1122 00:28:15.340000  185312 provision.go:83] configureAuth start
	I1122 00:28:15.340048  185312 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-220412
	I1122 00:28:15.359548  185312 provision.go:138] copyHostCerts
	I1122 00:28:15.359602  185312 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9122/.minikube/key.pem, removing ...
	I1122 00:28:15.359613  185312 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9122/.minikube/key.pem
	I1122 00:28:15.360988  185312 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21934-9122/.minikube/key.pem (1675 bytes)
	I1122 00:28:15.361135  185312 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9122/.minikube/ca.pem, removing ...
	I1122 00:28:15.361143  185312 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9122/.minikube/ca.pem
	I1122 00:28:15.361184  185312 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21934-9122/.minikube/ca.pem (1078 bytes)
	I1122 00:28:15.361265  185312 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9122/.minikube/cert.pem, removing ...
	I1122 00:28:15.361271  185312 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9122/.minikube/cert.pem
	I1122 00:28:15.361308  185312 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21934-9122/.minikube/cert.pem (1123 bytes)
	I1122 00:28:15.361368  185312 provision.go:112] generating server cert: /home/jenkins/minikube-integration/21934-9122/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-220412 san=[192.168.85.2 127.0.0.1 localhost 127.0.0.1 minikube stopped-upgrade-220412]
	I1122 00:28:15.584255  185312 provision.go:172] copyRemoteCerts
	I1122 00:28:15.584301  185312 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1122 00:28:15.584336  185312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-220412
	I1122 00:28:15.603819  185312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32993 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/stopped-upgrade-220412/id_rsa Username:docker}
	I1122 00:28:15.692739  185312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1122 00:28:15.718556  185312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1122 00:28:13.978255  194936 cli_runner.go:217] Completed: docker run --rm --name cert-expiration-624739-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-624739 --entrypoint /usr/bin/test -v cert-expiration-624739:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -d /var/lib: (5.678745918s)
	I1122 00:28:13.978277  194936 oci.go:107] Successfully prepared a docker volume cert-expiration-624739
	I1122 00:28:13.978328  194936 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:28:13.978337  194936 kic.go:194] Starting extracting preloaded images to volume ...
	I1122 00:28:13.978406  194936 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21934-9122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v cert-expiration-624739:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -I lz4 -xf /preloaded.tar -C /extractDir
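Both upgrade profiles and cert-expiration-624739 seed their named volumes the same way: mount the preload tarball read-only into a throwaway kicbase container and untar it into the volume. Stripped to its essentials (volume name and tarball path are placeholders; the digest pin from the log is omitted for brevity):

	# Extract a preloaded-images tarball into a docker named volume.
	docker volume create my-volume                     # placeholder name
	docker run --rm --entrypoint /usr/bin/tar \
	  -v /path/to/preloaded-images.tar.lz4:/preloaded.tar:ro \
	  -v my-volume:/extractDir \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934 \
	  -I lz4 -xf /preloaded.tar -C /extractDir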
	I1122 00:28:15.363354  193714 addons.go:530] duration metric: took 6.443147ms for enable addons: enabled=[]
	I1122 00:28:15.365108  193714 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:28:15.489131  193714 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:28:15.504310  193714 node_ready.go:35] waiting up to 6m0s for node "pause-044220" to be "Ready" ...
	W1122 00:28:17.507809  193714 node_ready.go:57] node "pause-044220" has "Ready":"False" status (will retry)
	I1122 00:28:15.744184  185312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1122 00:28:15.769206  185312 provision.go:86] duration metric: configureAuth took 429.192966ms
	I1122 00:28:15.769233  185312 ubuntu.go:193] setting minikube options for container-runtime
	I1122 00:28:15.769392  185312 config.go:182] Loaded profile config "stopped-upgrade-220412": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1122 00:28:15.769497  185312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-220412
	I1122 00:28:15.785955  185312 main.go:141] libmachine: Using SSH client type: native
	I1122 00:28:15.786462  185312 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 127.0.0.1 32993 <nil> <nil>}
	I1122 00:28:15.786503  185312 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1122 00:28:16.013765  185312 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1122 00:28:16.013783  185312 machine.go:91] provisioned docker machine in 989.271734ms
	I1122 00:28:16.013798  185312 client.go:171] LocalClient.Create took 26.261626746s
	I1122 00:28:16.013815  185312 start.go:167] duration metric: libmachine.API.Create for "stopped-upgrade-220412" took 26.261676679s
	I1122 00:28:16.013822  185312 start.go:300] post-start starting for "stopped-upgrade-220412" (driver="docker")
	I1122 00:28:16.013834  185312 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1122 00:28:16.013885  185312 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1122 00:28:16.013923  185312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-220412
	I1122 00:28:16.030620  185312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32993 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/stopped-upgrade-220412/id_rsa Username:docker}
	I1122 00:28:16.120728  185312 ssh_runner.go:195] Run: cat /etc/os-release
	I1122 00:28:16.124364  185312 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1122 00:28:16.124397  185312 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1122 00:28:16.124410  185312 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1122 00:28:16.124418  185312 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1122 00:28:16.124430  185312 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-9122/.minikube/addons for local assets ...
	I1122 00:28:16.124492  185312 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-9122/.minikube/files for local assets ...
	I1122 00:28:16.124558  185312 filesync.go:149] local asset: /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem -> 145852.pem in /etc/ssl/certs
	I1122 00:28:16.124640  185312 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1122 00:28:16.134632  185312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem --> /etc/ssl/certs/145852.pem (1708 bytes)
	I1122 00:28:16.160877  185312 start.go:303] post-start completed in 147.041923ms
	I1122 00:28:16.161240  185312 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-220412
	I1122 00:28:16.179977  185312 profile.go:148] Saving config to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/stopped-upgrade-220412/config.json ...
	I1122 00:28:16.180305  185312 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:28:16.180349  185312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-220412
	I1122 00:28:16.196628  185312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32993 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/stopped-upgrade-220412/id_rsa Username:docker}
	I1122 00:28:16.280065  185312 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 00:28:16.284350  185312 start.go:128] duration metric: createHost completed in 26.534219712s
	I1122 00:28:16.284369  185312 start.go:83] releasing machines lock for "stopped-upgrade-220412", held for 26.534329344s
	I1122 00:28:16.284435  185312 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-220412
	I1122 00:28:16.303395  185312 ssh_runner.go:195] Run: cat /version.json
	I1122 00:28:16.303437  185312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-220412
	I1122 00:28:16.303481  185312 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1122 00:28:16.303525  185312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-220412
	I1122 00:28:16.322685  185312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32993 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/stopped-upgrade-220412/id_rsa Username:docker}
	I1122 00:28:16.323117  185312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32993 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/stopped-upgrade-220412/id_rsa Username:docker}
	I1122 00:28:16.501646  185312 ssh_runner.go:195] Run: systemctl --version
	I1122 00:28:16.506651  185312 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1122 00:28:16.645781  185312 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1122 00:28:16.650369  185312 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1122 00:28:16.677683  185312 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1122 00:28:16.677755  185312 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1122 00:28:16.717037  185312 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
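Both upgrade runs sideline the stock CNI configs the same way: anything in /etc/cni/net.d matching *loopback.conf*, *bridge*, or *podman* is renamed with a .mk_disabled suffix so the CNI minikube selects later (kindnet, per the "recommending kindnet" lines further down) owns pod networking. A sketch of that rename-to-disable idiom in Go:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	patterns := []string{"*loopback.conf*", "*bridge*", "*podman*"}
	for _, p := range patterns {
		matches, err := filepath.Glob(filepath.Join("/etc/cni/net.d", p))
		if err != nil {
			panic(err)
		}
		for _, m := range matches {
			if filepath.Ext(m) == ".mk_disabled" {
				continue // already sidelined on a previous pass
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				panic(err)
			}
			fmt.Println("disabled", m)
		}
	}
}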
	I1122 00:28:16.717153  185312 start.go:472] detecting cgroup driver to use...
	I1122 00:28:16.717190  185312 detect.go:199] detected "systemd" cgroup driver on host os
	I1122 00:28:16.717246  185312 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1122 00:28:16.733092  185312 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1122 00:28:16.745174  185312 docker.go:203] disabling cri-docker service (if available) ...
	I1122 00:28:16.745230  185312 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1122 00:28:16.761994  185312 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1122 00:28:16.776022  185312 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1122 00:28:16.850802  185312 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1122 00:28:16.938709  185312 docker.go:219] disabling docker service ...
	I1122 00:28:16.938780  185312 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1122 00:28:16.956073  185312 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1122 00:28:16.967453  185312 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1122 00:28:17.052851  185312 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1122 00:28:17.239426  185312 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1122 00:28:17.250069  185312 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1122 00:28:17.265874  185312 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1122 00:28:17.265921  185312 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:28:17.481371  185312 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1122 00:28:17.481442  185312 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:28:17.609516  185312 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:28:17.738996  185312 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
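The sed edits above pin the pause image, force cgroup_manager to "systemd", and re-insert conmon_cgroup = "pod" in /etc/crio/crio.conf.d/02-crio.conf. The same line-oriented rewrite in Go, as a sketch rather than minikube's actual implementation, assuming (as the sed sequence does) that conmon_cgroup only ever follows cgroup_manager:

package main

import (
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	// Mirrors: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|'
	// followed by deleting and re-adding the conmon_cgroup line.
	re := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	out := re.ReplaceAll(data, []byte("cgroup_manager = \"systemd\"\nconmon_cgroup = \"pod\""))
	if err := os.WriteFile(conf, out, 0o644); err != nil {
		panic(err)
	}
}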
	I1122 00:28:17.868563  185312 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1122 00:28:17.877749  185312 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1122 00:28:17.886330  185312 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1122 00:28:17.894986  185312 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:28:17.956091  185312 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1122 00:28:18.717486  185312 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1122 00:28:18.717550  185312 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1122 00:28:18.721890  185312 start.go:540] Will wait 60s for crictl version
	I1122 00:28:18.721938  185312 ssh_runner.go:195] Run: which crictl
	I1122 00:28:18.725402  185312 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1122 00:28:18.768134  185312 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1122 00:28:18.768210  185312 ssh_runner.go:195] Run: crio --version
	I1122 00:28:18.805436  185312 ssh_runner.go:195] Run: crio --version
	I1122 00:28:18.843978  185312 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.6 ...
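"Will wait 60s for socket path" above is a plain stat-until-timeout poll against /var/run/crio/crio.sock after the restart, followed by the crictl version probe. A sketch of that wait loop; waitForSocket is a made-up helper name, not minikube's:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists or the timeout elapses.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(250 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		panic(err)
	}
	fmt.Println("crio socket is up")
}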
	I1122 00:28:15.304250  193202 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1122 00:28:15.338106  193202 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/21934-9122/.minikube CaCertPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21934-9122/.minikube}
	I1122 00:28:15.338138  193202 ubuntu.go:177] setting up certificates
	I1122 00:28:15.338148  193202 provision.go:83] configureAuth start
	I1122 00:28:15.338303  193202 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-670577
	I1122 00:28:15.357394  193202 provision.go:138] copyHostCerts
	I1122 00:28:15.357462  193202 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9122/.minikube/key.pem, removing ...
	I1122 00:28:15.357472  193202 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9122/.minikube/key.pem
	I1122 00:28:15.357542  193202 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21934-9122/.minikube/key.pem (1675 bytes)
	I1122 00:28:15.357636  193202 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9122/.minikube/ca.pem, removing ...
	I1122 00:28:15.357641  193202 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9122/.minikube/ca.pem
	I1122 00:28:15.357676  193202 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21934-9122/.minikube/ca.pem (1078 bytes)
	I1122 00:28:15.357750  193202 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9122/.minikube/cert.pem, removing ...
	I1122 00:28:15.357756  193202 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9122/.minikube/cert.pem
	I1122 00:28:15.357791  193202 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21934-9122/.minikube/cert.pem (1123 bytes)
	I1122 00:28:15.357854  193202 provision.go:112] generating server cert: /home/jenkins/minikube-integration/21934-9122/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-670577 san=[192.168.103.2 127.0.0.1 localhost 127.0.0.1 minikube running-upgrade-670577]
	I1122 00:28:15.481830  193202 provision.go:172] copyRemoteCerts
	I1122 00:28:15.481885  193202 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1122 00:28:15.481930  193202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-670577
	I1122 00:28:15.501019  193202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32988 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/running-upgrade-670577/id_rsa Username:docker}
	I1122 00:28:15.587767  193202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1122 00:28:15.617857  193202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1122 00:28:15.641541  193202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1122 00:28:15.665449  193202 provision.go:86] duration metric: configureAuth took 327.289941ms
	I1122 00:28:15.665471  193202 ubuntu.go:193] setting minikube options for container-runtime
	I1122 00:28:15.665639  193202 config.go:182] Loaded profile config "running-upgrade-670577": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1122 00:28:15.665753  193202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-670577
	I1122 00:28:15.685126  193202 main.go:141] libmachine: Using SSH client type: native
	I1122 00:28:15.685677  193202 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 127.0.0.1 32988 <nil> <nil>}
	I1122 00:28:15.685701  193202 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1122 00:28:15.921742  193202 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1122 00:28:15.921762  193202 machine.go:91] provisioned docker machine in 948.751439ms
	I1122 00:28:15.921770  193202 client.go:171] LocalClient.Create took 18.295485742s
	I1122 00:28:15.921788  193202 start.go:167] duration metric: libmachine.API.Create for "running-upgrade-670577" took 18.29553475s
	I1122 00:28:15.921797  193202 start.go:300] post-start starting for "running-upgrade-670577" (driver="docker")
	I1122 00:28:15.921830  193202 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1122 00:28:15.921897  193202 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1122 00:28:15.921931  193202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-670577
	I1122 00:28:15.941172  193202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32988 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/running-upgrade-670577/id_rsa Username:docker}
	I1122 00:28:16.034036  193202 ssh_runner.go:195] Run: cat /etc/os-release
	I1122 00:28:16.037421  193202 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1122 00:28:16.037455  193202 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1122 00:28:16.037469  193202 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1122 00:28:16.037477  193202 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1122 00:28:16.037488  193202 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-9122/.minikube/addons for local assets ...
	I1122 00:28:16.037541  193202 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-9122/.minikube/files for local assets ...
	I1122 00:28:16.037659  193202 filesync.go:149] local asset: /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem -> 145852.pem in /etc/ssl/certs
	I1122 00:28:16.037794  193202 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1122 00:28:16.047663  193202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem --> /etc/ssl/certs/145852.pem (1708 bytes)
	I1122 00:28:16.075361  193202 start.go:303] post-start completed in 153.548208ms
	I1122 00:28:16.075717  193202 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-670577
	I1122 00:28:16.094079  193202 profile.go:148] Saving config to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/running-upgrade-670577/config.json ...
	I1122 00:28:16.094307  193202 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:28:16.094349  193202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-670577
	I1122 00:28:16.112074  193202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32988 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/running-upgrade-670577/id_rsa Username:docker}
	I1122 00:28:16.196810  193202 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 00:28:16.201764  193202 start.go:128] duration metric: createHost completed in 18.577925614s
	I1122 00:28:16.201781  193202 start.go:83] releasing machines lock for "running-upgrade-670577", held for 18.578076677s
	I1122 00:28:16.201846  193202 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-670577
	I1122 00:28:16.221146  193202 ssh_runner.go:195] Run: cat /version.json
	I1122 00:28:16.221201  193202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-670577
	I1122 00:28:16.221218  193202 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1122 00:28:16.221284  193202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-670577
	I1122 00:28:16.239354  193202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32988 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/running-upgrade-670577/id_rsa Username:docker}
	I1122 00:28:16.241004  193202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32988 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/running-upgrade-670577/id_rsa Username:docker}
	I1122 00:28:16.422397  193202 ssh_runner.go:195] Run: systemctl --version
	I1122 00:28:16.426985  193202 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1122 00:28:16.569377  193202 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1122 00:28:16.573976  193202 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1122 00:28:16.596453  193202 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1122 00:28:16.596543  193202 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1122 00:28:16.626448  193202 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1122 00:28:16.626464  193202 start.go:472] detecting cgroup driver to use...
	I1122 00:28:16.626491  193202 detect.go:199] detected "systemd" cgroup driver on host os
	I1122 00:28:16.626529  193202 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1122 00:28:16.640927  193202 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1122 00:28:16.653473  193202 docker.go:203] disabling cri-docker service (if available) ...
	I1122 00:28:16.653509  193202 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1122 00:28:16.667453  193202 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1122 00:28:16.684318  193202 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1122 00:28:16.765141  193202 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1122 00:28:16.860190  193202 docker.go:219] disabling docker service ...
	I1122 00:28:16.860271  193202 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1122 00:28:16.878796  193202 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1122 00:28:16.894745  193202 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1122 00:28:16.976935  193202 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1122 00:28:17.112718  193202 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1122 00:28:17.123772  193202 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1122 00:28:17.139685  193202 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1122 00:28:17.139733  193202 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:28:17.221652  193202 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1122 00:28:17.221702  193202 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:28:17.271317  193202 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:28:17.353026  193202 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:28:17.480444  193202 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1122 00:28:17.490385  193202 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1122 00:28:17.498417  193202 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1122 00:28:17.506613  193202 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:28:17.621809  193202 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1122 00:28:18.718269  193202 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.096428716s)
	I1122 00:28:18.718289  193202 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1122 00:28:18.718338  193202 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1122 00:28:18.722652  193202 start.go:540] Will wait 60s for crictl version
	I1122 00:28:18.722699  193202 ssh_runner.go:195] Run: which crictl
	I1122 00:28:18.727317  193202 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1122 00:28:18.768419  193202 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1122 00:28:18.768499  193202 ssh_runner.go:195] Run: crio --version
	I1122 00:28:18.807650  193202 ssh_runner.go:195] Run: crio --version
	I1122 00:28:18.851617  193202 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.6 ...
	I1122 00:28:18.853115  193202 cli_runner.go:164] Run: docker network inspect running-upgrade-670577 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:28:18.874951  193202 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1122 00:28:18.878607  193202 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
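The bash one-liner above is an idempotent /etc/hosts update: filter out any line already tagged host.minikube.internal, append the fresh mapping, and copy the temp file back over /etc/hosts. The equivalent filter-and-append in Go (a sketch; the IP is the gateway from this run, and writing /etc/hosts needs root):

package main

import (
	"os"
	"strings"
)

func main() {
	const name = "host.minikube.internal"
	const entry = "192.168.103.1\t" + name

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop the stale mapping, mirroring grep -v
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry, "")
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")), 0o644); err != nil {
		panic(err)
	}
}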
	I1122 00:28:18.893506  193202 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1122 00:28:18.893558  193202 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:28:18.961843  193202 crio.go:496] all images are preloaded for cri-o runtime.
	I1122 00:28:18.961861  193202 crio.go:415] Images already preloaded, skipping extraction
	I1122 00:28:18.961913  193202 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:28:19.003827  193202 crio.go:496] all images are preloaded for cri-o runtime.
	I1122 00:28:19.003844  193202 cache_images.go:84] Images are preloaded, skipping loading
	I1122 00:28:19.003918  193202 ssh_runner.go:195] Run: crio config
	I1122 00:28:19.053271  193202 cni.go:84] Creating CNI manager for ""
	I1122 00:28:19.053284  193202 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1122 00:28:19.053317  193202 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1122 00:28:19.053341  193202 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-670577 NodeName:running-upgrade-670577 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1122 00:28:19.053513  193202 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "running-upgrade-670577"
	  kubeletExtraArgs:
	    node-ip: 192.168.103.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1122 00:28:19.053604  193202 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=running-upgrade-670577 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:running-upgrade-670577 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
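The kubeadm YAML documents and kubelet unit above are rendered from the options struct logged at kubeadm.go:176. A toy text/template rendering of just the InitConfiguration head, under the assumption that struct fields map one-to-one onto the YAML; this is not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

type initCfg struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	cfg := initCfg{AdvertiseAddress: "192.168.103.2", BindPort: 8443, NodeName: "running-upgrade-670577"}
	if err := t.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}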
	I1122 00:28:19.053654  193202 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1122 00:28:19.063988  193202 binaries.go:44] Found k8s binaries, skipping transfer
	I1122 00:28:19.064087  193202 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1122 00:28:19.074514  193202 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I1122 00:28:19.095036  193202 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1122 00:28:19.118147  193202 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I1122 00:28:19.136905  193202 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1122 00:28:19.140397  193202 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:28:19.151664  193202 certs.go:56] Setting up /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/running-upgrade-670577 for IP: 192.168.103.2
	I1122 00:28:19.151689  193202 certs.go:190] acquiring lock for shared ca certs: {Name:mk04e97b22fe69e444baefaaf0d53a0afe979470 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:28:19.151832  193202 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/21934-9122/.minikube/ca.key
	I1122 00:28:19.151869  193202 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/21934-9122/.minikube/proxy-client-ca.key
	I1122 00:28:19.151912  193202 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/running-upgrade-670577/client.key
	I1122 00:28:19.151923  193202 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/running-upgrade-670577/client.crt with IP's: []
	I1122 00:28:19.217545  193202 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/running-upgrade-670577/client.crt ...
	I1122 00:28:19.217566  193202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/running-upgrade-670577/client.crt: {Name:mk0568809c62747eabcee3b5df3b589cf6fb0169 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:28:19.217722  193202 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/running-upgrade-670577/client.key ...
	I1122 00:28:19.217735  193202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/running-upgrade-670577/client.key: {Name:mk0e843cac96c166ac471d288e9c151cc4549ed7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:28:19.217842  193202 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/running-upgrade-670577/apiserver.key.33fce0b9
	I1122 00:28:19.217856  193202 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/running-upgrade-670577/apiserver.crt.33fce0b9 with IP's: [192.168.103.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1122 00:28:19.526799  193202 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/running-upgrade-670577/apiserver.crt.33fce0b9 ...
	I1122 00:28:19.526821  193202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/running-upgrade-670577/apiserver.crt.33fce0b9: {Name:mkea39b5805b5daad08075e952721792601e3653 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:28:19.526990  193202 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/running-upgrade-670577/apiserver.key.33fce0b9 ...
	I1122 00:28:19.527001  193202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/running-upgrade-670577/apiserver.key.33fce0b9: {Name:mk8646baab87d81020e45de4236fa34270adfcca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:28:19.527100  193202 certs.go:337] copying /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/running-upgrade-670577/apiserver.crt.33fce0b9 -> /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/running-upgrade-670577/apiserver.crt
	I1122 00:28:19.527184  193202 certs.go:341] copying /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/running-upgrade-670577/apiserver.key.33fce0b9 -> /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/running-upgrade-670577/apiserver.key
	I1122 00:28:19.527254  193202 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/running-upgrade-670577/proxy-client.key
	I1122 00:28:19.527267  193202 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/running-upgrade-670577/proxy-client.crt with IP's: []
	I1122 00:28:19.684940  193202 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/running-upgrade-670577/proxy-client.crt ...
	I1122 00:28:19.684955  193202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/running-upgrade-670577/proxy-client.crt: {Name:mk73aa95d4504bbc9fff1c6e3fabc5ce76da1fbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:28:19.685135  193202 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/running-upgrade-670577/proxy-client.key ...
	I1122 00:28:19.685147  193202 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/running-upgrade-670577/proxy-client.key: {Name:mk4e5b535951d5e2d8af70a3478357056fcca5fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
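crypto.go:68 above issues each leaf cert against the shared minikubeCA: client.crt, then apiserver.crt with the SAN set [192.168.103.2 10.96.0.1 127.0.0.1 10.0.0.1], then proxy-client.crt. A compact sketch of CA-signed issuance with IP SANs via crypto/x509; the CA is generated inline here for brevity (minikube reuses .minikube/ca.key) and error handling is elided:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA; minikube would load the existing ca.key instead.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Serving cert with the IP SANs seen in the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		IPAddresses: []net.IP{
			net.ParseIP("192.168.103.2"), net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		},
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	}
	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}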
	I1122 00:28:19.685340  193202 certs.go:437] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/home/jenkins/minikube-integration/21934-9122/.minikube/certs/14585.pem (1338 bytes)
	W1122 00:28:19.685372  193202 certs.go:433] ignoring /home/jenkins/minikube-integration/21934-9122/.minikube/certs/home/jenkins/minikube-integration/21934-9122/.minikube/certs/14585_empty.pem, impossibly tiny 0 bytes
	I1122 00:28:19.685381  193202 certs.go:437] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca-key.pem (1679 bytes)
	I1122 00:28:19.685401  193202 certs.go:437] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem (1078 bytes)
	I1122 00:28:19.685426  193202 certs.go:437] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem (1123 bytes)
	I1122 00:28:19.685445  193202 certs.go:437] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/home/jenkins/minikube-integration/21934-9122/.minikube/certs/key.pem (1675 bytes)
	I1122 00:28:19.685481  193202 certs.go:437] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem (1708 bytes)
	I1122 00:28:19.686202  193202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/running-upgrade-670577/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1122 00:28:19.713127  193202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/running-upgrade-670577/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1122 00:28:19.736820  193202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/running-upgrade-670577/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1122 00:28:19.758842  193202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/running-upgrade-670577/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1122 00:28:19.781168  193202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1122 00:28:19.804736  193202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1122 00:28:19.828293  193202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1122 00:28:19.851526  193202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1122 00:28:19.874651  193202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem --> /usr/share/ca-certificates/145852.pem (1708 bytes)
	I1122 00:28:19.900064  193202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1122 00:28:19.922429  193202 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/certs/14585.pem --> /usr/share/ca-certificates/14585.pem (1338 bytes)
	I1122 00:28:19.945126  193202 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1122 00:28:19.962418  193202 ssh_runner.go:195] Run: openssl version
	I1122 00:28:19.967689  193202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145852.pem && ln -fs /usr/share/ca-certificates/145852.pem /etc/ssl/certs/145852.pem"
	I1122 00:28:19.976675  193202 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145852.pem
	I1122 00:28:19.980192  193202 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 21 23:52 /usr/share/ca-certificates/145852.pem
	I1122 00:28:19.980246  193202 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145852.pem
	I1122 00:28:19.986438  193202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145852.pem /etc/ssl/certs/3ec20f2e.0"
	I1122 00:28:19.996497  193202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1122 00:28:20.006154  193202 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:28:20.010141  193202 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 21 23:46 /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:28:20.010186  193202 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:28:20.017222  193202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1122 00:28:20.026654  193202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14585.pem && ln -fs /usr/share/ca-certificates/14585.pem /etc/ssl/certs/14585.pem"
	I1122 00:28:20.036525  193202 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14585.pem
	I1122 00:28:20.040088  193202 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 21 23:52 /usr/share/ca-certificates/14585.pem
	I1122 00:28:20.040123  193202 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14585.pem
	I1122 00:28:20.046813  193202 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14585.pem /etc/ssl/certs/51391683.0"
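The test -L || ln -fs blocks above install each trusted PEM under its OpenSSL subject hash (3ec20f2e.0, b5213941.0, 51391683.0); those hash-named symlinks are how OpenSSL's verifier locates issuers in /etc/ssl/certs. Reproducing the hash-then-link step from Go by shelling out to openssl (a sketch; needs openssl on PATH and root for the symlink):

package main

import (
	"os"
	"os/exec"
	"strings"
)

func main() {
	const pemPath = "/usr/share/ca-certificates/minikubeCA.pem"
	// openssl x509 -hash -noout prints just the subject hash, e.g. b5213941.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		panic(err)
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	_ = os.Remove(link) // ln -fs semantics: replace any existing link
	if err := os.Symlink(pemPath, link); err != nil {
		panic(err)
	}
}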
	I1122 00:28:20.056066  193202 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1122 00:28:20.059320  193202 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1122 00:28:20.059371  193202 kubeadm.go:404] StartCluster: {Name:running-upgrade-670577 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:running-upgrade-670577 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1122 00:28:20.059429  193202 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1122 00:28:20.059464  193202 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1122 00:28:20.094682  193202 cri.go:89] found id: ""
	I1122 00:28:20.094739  193202 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1122 00:28:20.103412  193202 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1122 00:28:20.111959  193202 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1122 00:28:20.111992  193202 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1122 00:28:20.120800  193202 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1122 00:28:20.120843  193202 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1122 00:28:20.167818  193202 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1122 00:28:20.167897  193202 kubeadm.go:322] [preflight] Running pre-flight checks
	I1122 00:28:20.206497  193202 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1122 00:28:20.206581  193202 kubeadm.go:322] KERNEL_VERSION: 6.8.0-1044-gcp
	I1122 00:28:20.206642  193202 kubeadm.go:322] OS: Linux
	I1122 00:28:20.206716  193202 kubeadm.go:322] CGROUPS_CPU: enabled
	I1122 00:28:20.206794  193202 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1122 00:28:20.206855  193202 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1122 00:28:20.206913  193202 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1122 00:28:20.206965  193202 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1122 00:28:20.207024  193202 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1122 00:28:20.207141  193202 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1122 00:28:20.207217  193202 kubeadm.go:322] CGROUPS_IO: enabled
	I1122 00:28:20.279422  193202 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1122 00:28:20.279547  193202 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1122 00:28:20.279661  193202 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1122 00:28:20.492125  193202 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
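The CGROUPS_* lines in the preflight output come from kubeadm's system verification. On a cgroup v1 host the per-controller enabled bit can be read straight out of /proc/cgroups; a minimal Go read of that table (sketch only; this run is on a 6.8 kernel where the v2 equivalent is /sys/fs/cgroup/cgroup.controllers):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/proc/cgroups")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := sc.Text()
		if strings.HasPrefix(line, "#") {
			continue // header: subsys_name hierarchy num_cgroups enabled
		}
		fields := strings.Fields(line)
		if len(fields) == 4 && fields[3] == "1" {
			fmt.Printf("CGROUPS_%s: enabled\n", strings.ToUpper(fields[0]))
		}
	}
	if err := sc.Err(); err != nil {
		panic(err)
	}
}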
	I1122 00:28:18.845089  185312 cli_runner.go:164] Run: docker network inspect stopped-upgrade-220412 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:28:18.869350  185312 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1122 00:28:18.874280  185312 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:28:18.888746  185312 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1122 00:28:18.888792  185312 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:28:18.954663  185312 crio.go:496] all images are preloaded for cri-o runtime.
	I1122 00:28:18.954682  185312 crio.go:415] Images already preloaded, skipping extraction
	I1122 00:28:18.954742  185312 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:28:18.991653  185312 crio.go:496] all images are preloaded for cri-o runtime.
	I1122 00:28:18.991671  185312 cache_images.go:84] Images are preloaded, skipping loading
	I1122 00:28:18.991750  185312 ssh_runner.go:195] Run: crio config
	I1122 00:28:19.041882  185312 cni.go:84] Creating CNI manager for ""
	I1122 00:28:19.041901  185312 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1122 00:28:19.041927  185312 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1122 00:28:19.041956  185312 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-220412 NodeName:stopped-upgrade-220412 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1122 00:28:19.042190  185312 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "stopped-upgrade-220412"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1122 00:28:19.042267  185312 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=stopped-upgrade-220412 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:stopped-upgrade-220412 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1122 00:28:19.042331  185312 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1122 00:28:19.053512  185312 binaries.go:44] Found k8s binaries, skipping transfer
	I1122 00:28:19.053589  185312 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1122 00:28:19.065212  185312 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I1122 00:28:19.085573  185312 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1122 00:28:19.107228  185312 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I1122 00:28:19.127869  185312 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1122 00:28:19.131705  185312 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:28:19.143335  185312 certs.go:56] Setting up /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/stopped-upgrade-220412 for IP: 192.168.85.2
	I1122 00:28:19.143373  185312 certs.go:190] acquiring lock for shared ca certs: {Name:mk04e97b22fe69e444baefaaf0d53a0afe979470 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:28:19.143515  185312 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/21934-9122/.minikube/ca.key
	I1122 00:28:19.143555  185312 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/21934-9122/.minikube/proxy-client-ca.key
	I1122 00:28:19.143598  185312 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/stopped-upgrade-220412/client.key
	I1122 00:28:19.143606  185312 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/stopped-upgrade-220412/client.crt with IP's: []
	I1122 00:28:19.366878  185312 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/stopped-upgrade-220412/client.crt ...
	I1122 00:28:19.366899  185312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/stopped-upgrade-220412/client.crt: {Name:mkc01826b8e32a27122ca93da8cd3c152feb840d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:28:19.367091  185312 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/stopped-upgrade-220412/client.key ...
	I1122 00:28:19.367109  185312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/stopped-upgrade-220412/client.key: {Name:mk3a66720974ad7ca5234fbaf14f5e4b5ab3e9c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:28:19.367248  185312 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/stopped-upgrade-220412/apiserver.key.43b9df8c
	I1122 00:28:19.367265  185312 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/stopped-upgrade-220412/apiserver.crt.43b9df8c with IP's: [192.168.85.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1122 00:28:19.689412  185312 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/stopped-upgrade-220412/apiserver.crt.43b9df8c ...
	I1122 00:28:19.689429  185312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/stopped-upgrade-220412/apiserver.crt.43b9df8c: {Name:mk040457a1f8472f1aff6b3c60c98a44e3ddbb7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:28:19.689565  185312 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/stopped-upgrade-220412/apiserver.key.43b9df8c ...
	I1122 00:28:19.689572  185312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/stopped-upgrade-220412/apiserver.key.43b9df8c: {Name:mk5719680b501ff8240186054bcec85fe6401669 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:28:19.689641  185312 certs.go:337] copying /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/stopped-upgrade-220412/apiserver.crt.43b9df8c -> /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/stopped-upgrade-220412/apiserver.crt
	I1122 00:28:19.689718  185312 certs.go:341] copying /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/stopped-upgrade-220412/apiserver.key.43b9df8c -> /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/stopped-upgrade-220412/apiserver.key
	I1122 00:28:19.689775  185312 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/stopped-upgrade-220412/proxy-client.key
	I1122 00:28:19.689784  185312 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/stopped-upgrade-220412/proxy-client.crt with IP's: []
	I1122 00:28:19.995979  185312 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/stopped-upgrade-220412/proxy-client.crt ...
	I1122 00:28:19.995996  185312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/stopped-upgrade-220412/proxy-client.crt: {Name:mkae622f1fc37f803c94532aca3594461c7cfc0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:28:19.996155  185312 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/stopped-upgrade-220412/proxy-client.key ...
	I1122 00:28:19.996165  185312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/stopped-upgrade-220412/proxy-client.key: {Name:mkec79bc8ddf3f4f154d9d1d1d0ce16ee6f442a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
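Editor's note: the certs.go/crypto.go sequence above mints three profile certs signed by the shared minikubeCA: a client cert, an API-server serving cert whose IP SANs cover the node IP (192.168.85.2), the kubernetes service VIP (10.96.0.1), loopback, and 10.0.0.1, and an aggregator (front-proxy) client cert. A hedged crypto/x509 sketch of the serving-cert step; it self-signs for brevity, whereas minikube signs with the CA, and the serial handling and file locks visible in the log are elided:

	// Sketch only: a self-signed stand-in carrying the same IP SANs the log reports.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "minikube"},
			IPAddresses: []net.IP{ // SANs from the log line above
				net.ParseIP("192.168.85.2"), // node IP
				net.ParseIP("10.96.0.1"),    // first IP of the 10.96.0.0/12 service CIDR
				net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"),
			},
			NotBefore:   time.Now(),
			NotAfter:    time.Now().AddDate(3, 0, 0),
			KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}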
	I1122 00:28:19.996381  185312 certs.go:437] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/home/jenkins/minikube-integration/21934-9122/.minikube/certs/14585.pem (1338 bytes)
	W1122 00:28:19.996413  185312 certs.go:433] ignoring /home/jenkins/minikube-integration/21934-9122/.minikube/certs/home/jenkins/minikube-integration/21934-9122/.minikube/certs/14585_empty.pem, impossibly tiny 0 bytes
	I1122 00:28:19.996429  185312 certs.go:437] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca-key.pem (1679 bytes)
	I1122 00:28:19.996469  185312 certs.go:437] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem (1078 bytes)
	I1122 00:28:19.996499  185312 certs.go:437] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem (1123 bytes)
	I1122 00:28:19.996534  185312 certs.go:437] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/home/jenkins/minikube-integration/21934-9122/.minikube/certs/key.pem (1675 bytes)
	I1122 00:28:19.996598  185312 certs.go:437] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem (1708 bytes)
	I1122 00:28:19.997546  185312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/stopped-upgrade-220412/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1122 00:28:20.022565  185312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/stopped-upgrade-220412/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1122 00:28:20.046220  185312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/stopped-upgrade-220412/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1122 00:28:20.070274  185312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/stopped-upgrade-220412/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1122 00:28:20.093694  185312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1122 00:28:20.117433  185312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1122 00:28:20.140895  185312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1122 00:28:20.167281  185312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1122 00:28:20.192157  185312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1122 00:28:20.222940  185312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/certs/14585.pem --> /usr/share/ca-certificates/14585.pem (1338 bytes)
	I1122 00:28:20.248844  185312 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem --> /usr/share/ca-certificates/145852.pem (1708 bytes)
	I1122 00:28:20.275248  185312 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
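Editor's note: the recurring "scp memory --> <path>" lines mean ssh_runner is copying an in-memory buffer (a rendered template, here the kubeconfig) straight to the remote path, with no temp file on the host. A rough fragment of the idea, assuming a golang.org/x/crypto/ssh client and substituting a pipe to sudo tee for the scp protocol minikube actually speaks:

	// Sketch only, not a complete program: stream bytes from memory to a remote file.
	package runner

	import (
		"bytes"

		"golang.org/x/crypto/ssh"
	)

	func copyMemory(client *ssh.Client, data []byte, dst string) error {
		sess, err := client.NewSession()
		if err != nil {
			return err
		}
		defer sess.Close()
		sess.Stdin = bytes.NewReader(data)
		// tee writes stdin to dst; >/dev/null keeps the session output quiet.
		return sess.Run("sudo tee " + dst + " >/dev/null")
	}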
	I1122 00:28:20.294848  185312 ssh_runner.go:195] Run: openssl version
	I1122 00:28:20.301114  185312 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1122 00:28:20.311571  185312 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:28:20.314987  185312 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 21 23:46 /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:28:20.315040  185312 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:28:20.322410  185312 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1122 00:28:20.332162  185312 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14585.pem && ln -fs /usr/share/ca-certificates/14585.pem /etc/ssl/certs/14585.pem"
	I1122 00:28:20.341363  185312 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14585.pem
	I1122 00:28:20.344573  185312 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 21 23:52 /usr/share/ca-certificates/14585.pem
	I1122 00:28:20.344617  185312 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14585.pem
	I1122 00:28:20.351349  185312 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14585.pem /etc/ssl/certs/51391683.0"
	I1122 00:28:20.362115  185312 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145852.pem && ln -fs /usr/share/ca-certificates/145852.pem /etc/ssl/certs/145852.pem"
	I1122 00:28:20.371887  185312 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145852.pem
	I1122 00:28:20.376383  185312 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 21 23:52 /usr/share/ca-certificates/145852.pem
	I1122 00:28:20.376426  185312 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145852.pem
	I1122 00:28:20.383952  185312 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145852.pem /etc/ssl/certs/3ec20f2e.0"
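Editor's note: the openssl/ln pairs above install each CA into the system trust directory. OpenSSL looks certificates up by subject-hash filename, so for every PEM the runner computes `openssl x509 -hash -noout` and links <hash>.0 at it (b5213941.0 for minikubeCA, 51391683.0 and 3ec20f2e.0 for the test certs in this run). The same step as a small Go helper, assuming openssl is on PATH; the function name is ours:

	// Sketch only: replicate the hash-symlink step with os/exec.
	package main

	import (
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func linkCAByHash(pemPath, certDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA in this run
		link := filepath.Join(certDir, hash+".0")
		_ = os.Remove(link) // ln -fs semantics: replace an existing link
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := linkCAByHash(os.Args[1], "/etc/ssl/certs"); err != nil {
			panic(err)
		}
	}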
	I1122 00:28:20.394773  185312 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1122 00:28:20.398375  185312 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1122 00:28:20.398427  185312 kubeadm.go:404] StartCluster: {Name:stopped-upgrade-220412 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:stopped-upgrade-220412 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1122 00:28:20.398535  185312 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1122 00:28:20.398602  185312 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1122 00:28:20.436858  185312 cri.go:89] found id: ""
	I1122 00:28:20.436913  185312 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1122 00:28:20.447827  185312 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1122 00:28:20.457822  185312 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1122 00:28:20.457872  185312 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1122 00:28:20.468073  185312 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1122 00:28:20.468116  185312 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1122 00:28:20.516590  185312 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1122 00:28:20.516655  185312 kubeadm.go:322] [preflight] Running pre-flight checks
	I1122 00:28:20.553599  185312 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1122 00:28:20.553680  185312 kubeadm.go:322] KERNEL_VERSION: 6.8.0-1044-gcp
	I1122 00:28:20.553724  185312 kubeadm.go:322] OS: Linux
	I1122 00:28:20.553829  185312 kubeadm.go:322] CGROUPS_CPU: enabled
	I1122 00:28:20.553900  185312 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1122 00:28:20.553969  185312 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1122 00:28:20.554067  185312 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1122 00:28:20.554141  185312 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1122 00:28:20.554199  185312 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1122 00:28:20.554265  185312 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1122 00:28:20.554320  185312 kubeadm.go:322] CGROUPS_IO: enabled
	I1122 00:28:20.630484  185312 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1122 00:28:20.630623  185312 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1122 00:28:20.630749  185312 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1122 00:28:20.840674  185312 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1122 00:28:18.641179  194936 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21934-9122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v cert-expiration-624739:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -I lz4 -xf /preloaded.tar -C /extractDir: (4.662717409s)
	I1122 00:28:18.641205  194936 kic.go:203] duration metric: took 4.662864572s to extract preloaded images to volume ...
	W1122 00:28:18.641294  194936 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1122 00:28:18.641328  194936 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1122 00:28:18.641371  194936 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1122 00:28:18.704152  194936 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cert-expiration-624739 --name cert-expiration-624739 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-624739 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cert-expiration-624739 --network cert-expiration-624739 --ip 192.168.94.2 --volume cert-expiration-624739:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e
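Editor's note: the `docker run` above publishes the guest's SSH, API-server, and registry ports to ephemeral loopback ports (`--publish=127.0.0.1::22` and friends). Every later `docker container inspect -f` call with the `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}` template recovers that mapping, which is why the SSH client below ends up dialing 127.0.0.1:32998. A minimal version of the lookup (helper name is ours):

	// Sketch only: recover the host port Docker assigned to the container's 22/tcp.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func sshHostPort(container string) (string, error) {
		// Same Go template the log uses, minus the decorative quoting.
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := sshHostPort("cert-expiration-624739")
		if err != nil {
			panic(err)
		}
		fmt.Println("ssh docker@127.0.0.1 -p", port) // prints 32998 in this run
	}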
	I1122 00:28:19.014071  194936 cli_runner.go:164] Run: docker container inspect cert-expiration-624739 --format={{.State.Running}}
	I1122 00:28:19.034810  194936 cli_runner.go:164] Run: docker container inspect cert-expiration-624739 --format={{.State.Status}}
	I1122 00:28:19.055668  194936 cli_runner.go:164] Run: docker exec cert-expiration-624739 stat /var/lib/dpkg/alternatives/iptables
	I1122 00:28:19.108344  194936 oci.go:144] the created container "cert-expiration-624739" has a running status.
	I1122 00:28:19.108367  194936 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21934-9122/.minikube/machines/cert-expiration-624739/id_rsa...
	I1122 00:28:19.195829  194936 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21934-9122/.minikube/machines/cert-expiration-624739/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1122 00:28:19.221855  194936 cli_runner.go:164] Run: docker container inspect cert-expiration-624739 --format={{.State.Status}}
	I1122 00:28:19.245113  194936 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1122 00:28:19.245128  194936 kic_runner.go:114] Args: [docker exec --privileged cert-expiration-624739 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1122 00:28:19.292506  194936 cli_runner.go:164] Run: docker container inspect cert-expiration-624739 --format={{.State.Status}}
	I1122 00:28:19.323417  194936 machine.go:94] provisionDockerMachine start ...
	I1122 00:28:19.323535  194936 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-624739
	I1122 00:28:19.347713  194936 main.go:143] libmachine: Using SSH client type: native
	I1122 00:28:19.348012  194936 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32998 <nil> <nil>}
	I1122 00:28:19.348025  194936 main.go:143] libmachine: About to run SSH command:
	hostname
	I1122 00:28:19.477870  194936 main.go:143] libmachine: SSH cmd err, output: <nil>: cert-expiration-624739
	
	I1122 00:28:19.477889  194936 ubuntu.go:182] provisioning hostname "cert-expiration-624739"
	I1122 00:28:19.477962  194936 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-624739
	I1122 00:28:19.497665  194936 main.go:143] libmachine: Using SSH client type: native
	I1122 00:28:19.497991  194936 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32998 <nil> <nil>}
	I1122 00:28:19.498001  194936 main.go:143] libmachine: About to run SSH command:
	sudo hostname cert-expiration-624739 && echo "cert-expiration-624739" | sudo tee /etc/hostname
	I1122 00:28:19.630923  194936 main.go:143] libmachine: SSH cmd err, output: <nil>: cert-expiration-624739
	
	I1122 00:28:19.630984  194936 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-624739
	I1122 00:28:19.650270  194936 main.go:143] libmachine: Using SSH client type: native
	I1122 00:28:19.650584  194936 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32998 <nil> <nil>}
	I1122 00:28:19.650605  194936 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-624739' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-624739/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-624739' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1122 00:28:19.775006  194936 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1122 00:28:19.775034  194936 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21934-9122/.minikube CaCertPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21934-9122/.minikube}
	I1122 00:28:19.775077  194936 ubuntu.go:190] setting up certificates
	I1122 00:28:19.775089  194936 provision.go:84] configureAuth start
	I1122 00:28:19.775153  194936 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-624739
	I1122 00:28:19.795101  194936 provision.go:143] copyHostCerts
	I1122 00:28:19.795155  194936 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9122/.minikube/key.pem, removing ...
	I1122 00:28:19.795162  194936 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9122/.minikube/key.pem
	I1122 00:28:19.795222  194936 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21934-9122/.minikube/key.pem (1675 bytes)
	I1122 00:28:19.795305  194936 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9122/.minikube/ca.pem, removing ...
	I1122 00:28:19.795309  194936 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9122/.minikube/ca.pem
	I1122 00:28:19.795334  194936 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21934-9122/.minikube/ca.pem (1078 bytes)
	I1122 00:28:19.795395  194936 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9122/.minikube/cert.pem, removing ...
	I1122 00:28:19.795398  194936 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9122/.minikube/cert.pem
	I1122 00:28:19.795420  194936 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21934-9122/.minikube/cert.pem (1123 bytes)
	I1122 00:28:19.795483  194936 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21934-9122/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-624739 san=[127.0.0.1 192.168.94.2 cert-expiration-624739 localhost minikube]
	I1122 00:28:19.813066  194936 provision.go:177] copyRemoteCerts
	I1122 00:28:19.813113  194936 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1122 00:28:19.813146  194936 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-624739
	I1122 00:28:19.832106  194936 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32998 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/cert-expiration-624739/id_rsa Username:docker}
	I1122 00:28:19.921489  194936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1122 00:28:19.939948  194936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1122 00:28:19.956793  194936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1122 00:28:19.973543  194936 provision.go:87] duration metric: took 198.442382ms to configureAuth
	I1122 00:28:19.973563  194936 ubuntu.go:206] setting minikube options for container-runtime
	I1122 00:28:19.973702  194936 config.go:182] Loaded profile config "cert-expiration-624739": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:28:19.973794  194936 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-624739
	I1122 00:28:19.993193  194936 main.go:143] libmachine: Using SSH client type: native
	I1122 00:28:19.993415  194936 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32998 <nil> <nil>}
	I1122 00:28:19.993425  194936 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1122 00:28:20.259589  194936 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1122 00:28:20.259612  194936 machine.go:97] duration metric: took 936.1766ms to provisionDockerMachine
	I1122 00:28:20.259623  194936 client.go:176] duration metric: took 12.131508346s to LocalClient.Create
	I1122 00:28:20.259646  194936 start.go:167] duration metric: took 12.131589366s to libmachine.API.Create "cert-expiration-624739"
	I1122 00:28:20.259654  194936 start.go:293] postStartSetup for "cert-expiration-624739" (driver="docker")
	I1122 00:28:20.259665  194936 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1122 00:28:20.259733  194936 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1122 00:28:20.259777  194936 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-624739
	I1122 00:28:20.278425  194936 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32998 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/cert-expiration-624739/id_rsa Username:docker}
	I1122 00:28:20.371432  194936 ssh_runner.go:195] Run: cat /etc/os-release
	I1122 00:28:20.375627  194936 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1122 00:28:20.375666  194936 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1122 00:28:20.375677  194936 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-9122/.minikube/addons for local assets ...
	I1122 00:28:20.375722  194936 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-9122/.minikube/files for local assets ...
	I1122 00:28:20.375785  194936 filesync.go:149] local asset: /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem -> 145852.pem in /etc/ssl/certs
	I1122 00:28:20.375860  194936 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1122 00:28:20.383761  194936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem --> /etc/ssl/certs/145852.pem (1708 bytes)
	I1122 00:28:20.405391  194936 start.go:296] duration metric: took 145.723028ms for postStartSetup
	I1122 00:28:20.405695  194936 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-624739
	I1122 00:28:20.424915  194936 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/cert-expiration-624739/config.json ...
	I1122 00:28:20.425251  194936 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:28:20.425304  194936 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-624739
	I1122 00:28:20.446515  194936 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32998 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/cert-expiration-624739/id_rsa Username:docker}
	I1122 00:28:20.535850  194936 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 00:28:20.540871  194936 start.go:128] duration metric: took 12.418886782s to createHost
	I1122 00:28:20.540890  194936 start.go:83] releasing machines lock for "cert-expiration-624739", held for 12.41901585s
	I1122 00:28:20.540955  194936 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-624739
	I1122 00:28:20.561200  194936 ssh_runner.go:195] Run: cat /version.json
	I1122 00:28:20.561257  194936 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-624739
	I1122 00:28:20.561417  194936 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1122 00:28:20.561481  194936 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-624739
	I1122 00:28:20.580460  194936 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32998 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/cert-expiration-624739/id_rsa Username:docker}
	I1122 00:28:20.580721  194936 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32998 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/cert-expiration-624739/id_rsa Username:docker}
	I1122 00:28:20.730552  194936 ssh_runner.go:195] Run: systemctl --version
	I1122 00:28:20.738131  194936 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1122 00:28:20.775485  194936 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1122 00:28:20.780019  194936 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1122 00:28:20.780098  194936 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1122 00:28:20.807306  194936 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
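Editor's note: disabling the bridge/podman CNI configs above is a rename, not a delete. Anything matching *bridge* or *podman* in /etc/cni/net.d gains a .mk_disabled suffix so CRI-O stops loading it, leaving the field clear for the kindnet CNI recommended later in the log. A sketch of that pass (function name is ours):

	// Sketch only: rename competing CNI configs, mirroring the find | mv above.
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	func disableBridgeCNIs(dir string) ([]string, error) {
		entries, err := os.ReadDir(dir)
		if err != nil {
			return nil, err
		}
		var disabled []string
		for _, e := range entries {
			name := e.Name()
			if strings.HasSuffix(name, ".mk_disabled") {
				continue // already disabled on a previous start
			}
			if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
				src := filepath.Join(dir, name)
				if err := os.Rename(src, src+".mk_disabled"); err != nil {
					return disabled, err
				}
				disabled = append(disabled, src)
			}
		}
		return disabled, nil
	}

	func main() {
		disabled, err := disableBridgeCNIs("/etc/cni/net.d")
		if err != nil {
			panic(err)
		}
		fmt.Println("disabled:", disabled)
	}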
	I1122 00:28:20.807327  194936 start.go:496] detecting cgroup driver to use...
	I1122 00:28:20.807372  194936 detect.go:190] detected "systemd" cgroup driver on host os
	I1122 00:28:20.807423  194936 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1122 00:28:20.826174  194936 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1122 00:28:20.839027  194936 docker.go:218] disabling cri-docker service (if available) ...
	I1122 00:28:20.839093  194936 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1122 00:28:20.858108  194936 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1122 00:28:20.877974  194936 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1122 00:28:20.968403  194936 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1122 00:28:21.056158  194936 docker.go:234] disabling docker service ...
	I1122 00:28:21.056229  194936 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1122 00:28:21.072744  194936 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1122 00:28:21.083886  194936 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1122 00:28:21.170481  194936 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1122 00:28:21.255944  194936 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1122 00:28:21.267070  194936 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1122 00:28:21.279921  194936 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1122 00:28:21.279961  194936 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:28:21.288986  194936 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1122 00:28:21.289034  194936 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:28:21.296973  194936 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:28:21.304624  194936 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:28:21.312412  194936 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1122 00:28:21.319404  194936 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:28:21.326859  194936 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:28:21.339107  194936 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
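Editor's note: taken together, the sed passes over /etc/crio/crio.conf.d/02-crio.conf pin pause_image to registry.k8s.io/pause:3.10.1, switch cgroup_manager to systemd (matching the host cgroup driver detected above), force conmon_cgroup to "pod", and seed default_sysctls with net.ipv4.ip_unprivileged_port_start=0. One of those in-place rewrites, redone as a hedged Go equivalent:

	// Sketch only: a sed-style "set this TOML key" rewrite, as used for cgroup_manager.
	package main

	import (
		"os"
		"regexp"
	)

	func setConfKey(path, key, value string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		// Matches any whole line assigning the key, like sed 's|^.*key = .*$|...|'.
		re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
		out := re.ReplaceAll(data, []byte(key+` = "`+value+`"`))
		return os.WriteFile(path, out, 0644)
	}

	func main() {
		if err := setConfKey("/etc/crio/crio.conf.d/02-crio.conf", "cgroup_manager", "systemd"); err != nil {
			panic(err)
		}
	}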
	I1122 00:28:21.346995  194936 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1122 00:28:21.353655  194936 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1122 00:28:21.360320  194936 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:28:21.440218  194936 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1122 00:28:21.585342  194936 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1122 00:28:21.585404  194936 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1122 00:28:21.589223  194936 start.go:564] Will wait 60s for crictl version
	I1122 00:28:21.589274  194936 ssh_runner.go:195] Run: which crictl
	I1122 00:28:21.592716  194936 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1122 00:28:21.617315  194936 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1122 00:28:21.617369  194936 ssh_runner.go:195] Run: crio --version
	I1122 00:28:21.643829  194936 ssh_runner.go:195] Run: crio --version
	I1122 00:28:21.670343  194936 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1122 00:28:20.842149  185312 out.go:204]   - Generating certificates and keys ...
	I1122 00:28:20.842289  185312 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1122 00:28:20.842404  185312 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1122 00:28:20.982146  185312 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1122 00:28:21.088953  185312 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1122 00:28:21.143934  185312 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1122 00:28:21.278379  185312 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1122 00:28:21.410709  185312 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1122 00:28:21.410961  185312 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost stopped-upgrade-220412] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1122 00:28:21.464228  185312 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1122 00:28:21.464404  185312 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost stopped-upgrade-220412] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1122 00:28:21.633646  185312 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1122 00:28:21.853031  185312 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1122 00:28:21.931523  185312 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1122 00:28:21.931646  185312 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1122 00:28:22.064948  185312 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1122 00:28:22.271503  185312 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1122 00:28:22.390578  185312 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1122 00:28:22.480302  185312 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1122 00:28:22.481342  185312 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1122 00:28:22.485563  185312 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1122 00:28:21.671319  194936 cli_runner.go:164] Run: docker network inspect cert-expiration-624739 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:28:21.687352  194936 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1122 00:28:21.691636  194936 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:28:21.701450  194936 kubeadm.go:884] updating cluster {Name:cert-expiration-624739 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-624739 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1122 00:28:21.701561  194936 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:28:21.701600  194936 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:28:21.733923  194936 crio.go:514] all images are preloaded for cri-o runtime.
	I1122 00:28:21.733934  194936 crio.go:433] Images already preloaded, skipping extraction
	I1122 00:28:21.733969  194936 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:28:21.757143  194936 crio.go:514] all images are preloaded for cri-o runtime.
	I1122 00:28:21.757153  194936 cache_images.go:86] Images are preloaded, skipping loading
	I1122 00:28:21.757160  194936 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1122 00:28:21.757232  194936 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=cert-expiration-624739 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-624739 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1122 00:28:21.757282  194936 ssh_runner.go:195] Run: crio config
	I1122 00:28:21.800716  194936 cni.go:84] Creating CNI manager for ""
	I1122 00:28:21.800733  194936 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1122 00:28:21.800752  194936 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1122 00:28:21.800781  194936 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-624739 NodeName:cert-expiration-624739 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1122 00:28:21.800978  194936 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-expiration-624739"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1122 00:28:21.801036  194936 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1122 00:28:21.808647  194936 binaries.go:51] Found k8s binaries, skipping transfer
	I1122 00:28:21.808692  194936 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1122 00:28:21.815866  194936 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1122 00:28:21.827900  194936 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1122 00:28:21.841922  194936 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
	I1122 00:28:21.853838  194936 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1122 00:28:21.857356  194936 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:28:21.866364  194936 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:28:21.947982  194936 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:28:21.972191  194936 certs.go:69] Setting up /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/cert-expiration-624739 for IP: 192.168.94.2
	I1122 00:28:21.972202  194936 certs.go:195] generating shared ca certs ...
	I1122 00:28:21.972221  194936 certs.go:227] acquiring lock for ca certs: {Name:mk04e97b22fe69e444baefaaf0d53a0afe979470 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:28:21.972387  194936 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21934-9122/.minikube/ca.key
	I1122 00:28:21.972442  194936 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21934-9122/.minikube/proxy-client-ca.key
	I1122 00:28:21.972458  194936 certs.go:257] generating profile certs ...
	I1122 00:28:21.972525  194936 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/cert-expiration-624739/client.key
	I1122 00:28:21.972541  194936 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/cert-expiration-624739/client.crt with IP's: []
	I1122 00:28:22.041901  194936 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/cert-expiration-624739/client.crt ...
	I1122 00:28:22.041916  194936 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/cert-expiration-624739/client.crt: {Name:mk3b7c1e754514b6aa3a7dcb39f458a3b77ce55c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:28:22.042100  194936 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/cert-expiration-624739/client.key ...
	I1122 00:28:22.042112  194936 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/cert-expiration-624739/client.key: {Name:mkcf525315559201b56fd3af0512e3f0d2a182ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:28:22.042229  194936 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/cert-expiration-624739/apiserver.key.1f3c2f42
	I1122 00:28:22.042241  194936 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/cert-expiration-624739/apiserver.crt.1f3c2f42 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1122 00:28:22.104922  194936 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/cert-expiration-624739/apiserver.crt.1f3c2f42 ...
	I1122 00:28:22.104936  194936 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/cert-expiration-624739/apiserver.crt.1f3c2f42: {Name:mkb28c8ef76a0486afddef46c0acac97eb13ee5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:28:22.105108  194936 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/cert-expiration-624739/apiserver.key.1f3c2f42 ...
	I1122 00:28:22.105120  194936 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/cert-expiration-624739/apiserver.key.1f3c2f42: {Name:mk2afd0a61b72c7ea831f79e1e40034b8cfc73e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:28:22.105235  194936 certs.go:382] copying /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/cert-expiration-624739/apiserver.crt.1f3c2f42 -> /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/cert-expiration-624739/apiserver.crt
	I1122 00:28:22.105308  194936 certs.go:386] copying /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/cert-expiration-624739/apiserver.key.1f3c2f42 -> /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/cert-expiration-624739/apiserver.key
	I1122 00:28:22.105356  194936 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/cert-expiration-624739/proxy-client.key
	I1122 00:28:22.105366  194936 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/cert-expiration-624739/proxy-client.crt with IP's: []
	I1122 00:28:22.288492  194936 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/cert-expiration-624739/proxy-client.crt ...
	I1122 00:28:22.288511  194936 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/cert-expiration-624739/proxy-client.crt: {Name:mke2855da8ddde5c4bd9293af3879dd3cf44e877 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:28:22.288679  194936 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/cert-expiration-624739/proxy-client.key ...
	I1122 00:28:22.288692  194936 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/cert-expiration-624739/proxy-client.key: {Name:mk6108117b9a5009d8a44f656395739341df96ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:28:22.288914  194936 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/14585.pem (1338 bytes)
	W1122 00:28:22.288957  194936 certs.go:480] ignoring /home/jenkins/minikube-integration/21934-9122/.minikube/certs/14585_empty.pem, impossibly tiny 0 bytes
	I1122 00:28:22.288966  194936 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca-key.pem (1679 bytes)
	I1122 00:28:22.289006  194936 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem (1078 bytes)
	I1122 00:28:22.289034  194936 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem (1123 bytes)
	I1122 00:28:22.289075  194936 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/key.pem (1675 bytes)
	I1122 00:28:22.289129  194936 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem (1708 bytes)
	I1122 00:28:22.289949  194936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1122 00:28:22.307500  194936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1122 00:28:22.323699  194936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1122 00:28:22.339719  194936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1122 00:28:22.355795  194936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/cert-expiration-624739/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1122 00:28:22.371557  194936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/cert-expiration-624739/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1122 00:28:22.387505  194936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/cert-expiration-624739/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1122 00:28:22.403631  194936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/cert-expiration-624739/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1122 00:28:22.420484  194936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem --> /usr/share/ca-certificates/145852.pem (1708 bytes)
	I1122 00:28:22.438075  194936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1122 00:28:22.453994  194936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/certs/14585.pem --> /usr/share/ca-certificates/14585.pem (1338 bytes)
	I1122 00:28:22.469748  194936 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1122 00:28:22.481365  194936 ssh_runner.go:195] Run: openssl version
	I1122 00:28:22.487652  194936 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1122 00:28:22.496233  194936 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:28:22.500368  194936 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 23:46 /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:28:22.500407  194936 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:28:22.539696  194936 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1122 00:28:22.547869  194936 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14585.pem && ln -fs /usr/share/ca-certificates/14585.pem /etc/ssl/certs/14585.pem"
	I1122 00:28:22.555463  194936 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14585.pem
	I1122 00:28:22.558758  194936 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 23:52 /usr/share/ca-certificates/14585.pem
	I1122 00:28:22.558799  194936 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14585.pem
	I1122 00:28:22.595134  194936 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14585.pem /etc/ssl/certs/51391683.0"
	I1122 00:28:22.604226  194936 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145852.pem && ln -fs /usr/share/ca-certificates/145852.pem /etc/ssl/certs/145852.pem"
	I1122 00:28:22.612853  194936 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145852.pem
	I1122 00:28:22.616304  194936 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 23:52 /usr/share/ca-certificates/145852.pem
	I1122 00:28:22.616346  194936 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145852.pem
	I1122 00:28:22.656851  194936 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145852.pem /etc/ssl/certs/3ec20f2e.0"
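
Context for the openssl/ln pairs above: they follow OpenSSL's c_rehash convention, where a CA becomes trusted by symlinking its PEM file under /etc/ssl/certs as <subject-hash>.0. A minimal sketch of one iteration in Go, assuming openssl is on PATH and the process can write to /etc/ssl/certs (the real runs go through ssh_runner on the node):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkCA replicates the command pair above: compute the OpenSSL subject
// hash of a PEM certificate, then (re)point /etc/ssl/certs/<hash>.0 at it.
func linkCA(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941", as in the log
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	os.Remove(link) // ln -fs semantics: replace any existing link
	return os.Symlink(pem, link)
}

func main() {
	if err := linkCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		panic(err)
	}
}
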
	I1122 00:28:22.665513  194936 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1122 00:28:22.669103  194936 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1122 00:28:22.669163  194936 kubeadm.go:401] StartCluster: {Name:cert-expiration-624739 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-624739 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:28:22.669258  194936 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1122 00:28:22.669304  194936 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1122 00:28:22.698207  194936 cri.go:89] found id: ""
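
For context, the `crictl ps -a --quiet` call above prints one container ID per line, filtered by the pod-namespace label; the empty result logged as `found id: ""` means no kube-system containers exist yet on this fresh node. A sketch of the same listing, assuming crictl is installed and its CRI endpoint is configured:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// --quiet limits output to container IDs; the label narrows it to
	// containers whose pod lives in the kube-system namespace.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		panic(err)
	}
	for _, id := range strings.Fields(string(out)) { // empty output => no loop
		fmt.Println("found id:", id)
	}
}
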
	I1122 00:28:22.698256  194936 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1122 00:28:22.705638  194936 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1122 00:28:22.713150  194936 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1122 00:28:22.713189  194936 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1122 00:28:22.720573  194936 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1122 00:28:22.720581  194936 kubeadm.go:158] found existing configuration files:
	
	I1122 00:28:22.720620  194936 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1122 00:28:22.727530  194936 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1122 00:28:22.727584  194936 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1122 00:28:22.734196  194936 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1122 00:28:22.741485  194936 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1122 00:28:22.741522  194936 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1122 00:28:22.748038  194936 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1122 00:28:22.754923  194936 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1122 00:28:22.754960  194936 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1122 00:28:22.761616  194936 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1122 00:28:22.768521  194936 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1122 00:28:22.768563  194936 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
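
The four grep/rm pairs above are minikube's stale-config cleanup: any kubeconfig under /etc/kubernetes that does not mention the expected control-plane endpoint is deleted so that `kubeadm init` regenerates it. A condensed local sketch, without the ssh_runner indirection; the endpoint and paths mirror the log:

package main

import (
	"os"
	"os/exec"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		path := "/etc/kubernetes/" + f
		// grep exits non-zero both when the pattern is absent and when the
		// file is missing (status 2 above); either way the config is stale
		// or gone, so it is removed.
		if err := exec.Command("grep", endpoint, path).Run(); err != nil {
			os.Remove(path)
		}
	}
}
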
	I1122 00:28:22.775163  194936 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1122 00:28:22.812852  194936 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1122 00:28:22.812934  194936 kubeadm.go:319] [preflight] Running pre-flight checks
	I1122 00:28:22.834952  194936 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1122 00:28:22.835037  194936 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1122 00:28:22.835102  194936 kubeadm.go:319] OS: Linux
	I1122 00:28:22.835161  194936 kubeadm.go:319] CGROUPS_CPU: enabled
	I1122 00:28:22.835231  194936 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1122 00:28:22.835297  194936 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1122 00:28:22.835373  194936 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1122 00:28:22.835418  194936 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1122 00:28:22.835460  194936 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1122 00:28:22.835503  194936 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1122 00:28:22.835538  194936 kubeadm.go:319] CGROUPS_IO: enabled
	I1122 00:28:22.892839  194936 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1122 00:28:22.892975  194936 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1122 00:28:22.893129  194936 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1122 00:28:22.900285  194936 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1122 00:28:20.493898  193202 out.go:204]   - Generating certificates and keys ...
	I1122 00:28:20.494010  193202 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1122 00:28:20.494139  193202 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1122 00:28:20.678304  193202 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1122 00:28:20.969681  193202 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1122 00:28:21.066893  193202 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1122 00:28:21.183993  193202 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1122 00:28:21.790721  193202 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1122 00:28:21.790931  193202 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost running-upgrade-670577] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1122 00:28:21.958958  193202 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1122 00:28:21.959178  193202 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost running-upgrade-670577] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1122 00:28:22.205352  193202 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1122 00:28:22.488434  193202 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1122 00:28:22.617598  193202 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1122 00:28:22.617713  193202 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1122 00:28:22.907519  193202 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1122 00:28:23.025159  193202 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1122 00:28:23.216350  193202 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1122 00:28:23.406949  193202 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1122 00:28:23.407496  193202 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1122 00:28:23.410752  193202 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1122 00:28:20.007848  193714 node_ready.go:57] node "pause-044220" has "Ready":"False" status (will retry)
	W1122 00:28:22.507729  193714 node_ready.go:57] node "pause-044220" has "Ready":"False" status (will retry)
	I1122 00:28:23.508711  193714 node_ready.go:49] node "pause-044220" is "Ready"
	I1122 00:28:23.508743  193714 node_ready.go:38] duration metric: took 8.004384383s for node "pause-044220" to be "Ready" ...
	I1122 00:28:23.508761  193714 api_server.go:52] waiting for apiserver process to appear ...
	I1122 00:28:23.508808  193714 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:28:23.524441  193714 api_server.go:72] duration metric: took 8.167587959s to wait for apiserver process to appear ...
	I1122 00:28:23.524472  193714 api_server.go:88] waiting for apiserver healthz status ...
	I1122 00:28:23.524494  193714 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1122 00:28:23.532635  193714 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1122 00:28:23.533975  193714 api_server.go:141] control plane version: v1.34.1
	I1122 00:28:23.534003  193714 api_server.go:131] duration metric: took 9.523798ms to wait for apiserver health ...
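
The healthz wait above is a plain HTTPS GET against the apiserver that succeeds on a 200 response with body "ok". A rough local equivalent; the real probe verifies the cluster CA, whereas InsecureSkipVerify here is only to keep the sketch short:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
	}}
	resp, err := client.Get("https://192.168.76.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok, as logged
}
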
	I1122 00:28:23.534014  193714 system_pods.go:43] waiting for kube-system pods to appear ...
	I1122 00:28:23.539644  193714 system_pods.go:59] 7 kube-system pods found
	I1122 00:28:23.539692  193714 system_pods.go:61] "coredns-66bc5c9577-c46n9" [4bf35b5e-3d40-4906-bab5-bb9d0c469a5a] Running
	I1122 00:28:23.539704  193714 system_pods.go:61] "etcd-pause-044220" [3bab319e-1c57-4d25-a674-2c8937af44d1] Running
	I1122 00:28:23.539741  193714 system_pods.go:61] "kindnet-6vbjb" [f6763d92-62c4-408f-b9da-9cfc56ce9326] Running
	I1122 00:28:23.539752  193714 system_pods.go:61] "kube-apiserver-pause-044220" [9930ef77-4401-43fb-912d-b571f3336177] Running
	I1122 00:28:23.539757  193714 system_pods.go:61] "kube-controller-manager-pause-044220" [3a41811c-4cab-4abb-b1ba-e8b21ecb6050] Running
	I1122 00:28:23.539762  193714 system_pods.go:61] "kube-proxy-lpz2b" [280f135b-a7a5-4abd-b233-b03ad2e60a2f] Running
	I1122 00:28:23.539767  193714 system_pods.go:61] "kube-scheduler-pause-044220" [041fc1ef-3cd7-41a0-b7d3-c7215a087516] Running
	I1122 00:28:23.539776  193714 system_pods.go:74] duration metric: took 5.754011ms to wait for pod list to return data ...
	I1122 00:28:23.539785  193714 default_sa.go:34] waiting for default service account to be created ...
	I1122 00:28:23.542778  193714 default_sa.go:45] found service account: "default"
	I1122 00:28:23.542799  193714 default_sa.go:55] duration metric: took 3.006689ms for default service account to be created ...
	I1122 00:28:23.542810  193714 system_pods.go:116] waiting for k8s-apps to be running ...
	I1122 00:28:23.547157  193714 system_pods.go:86] 7 kube-system pods found
	I1122 00:28:23.547183  193714 system_pods.go:89] "coredns-66bc5c9577-c46n9" [4bf35b5e-3d40-4906-bab5-bb9d0c469a5a] Running
	I1122 00:28:23.547190  193714 system_pods.go:89] "etcd-pause-044220" [3bab319e-1c57-4d25-a674-2c8937af44d1] Running
	I1122 00:28:23.547195  193714 system_pods.go:89] "kindnet-6vbjb" [f6763d92-62c4-408f-b9da-9cfc56ce9326] Running
	I1122 00:28:23.547202  193714 system_pods.go:89] "kube-apiserver-pause-044220" [9930ef77-4401-43fb-912d-b571f3336177] Running
	I1122 00:28:23.547208  193714 system_pods.go:89] "kube-controller-manager-pause-044220" [3a41811c-4cab-4abb-b1ba-e8b21ecb6050] Running
	I1122 00:28:23.547213  193714 system_pods.go:89] "kube-proxy-lpz2b" [280f135b-a7a5-4abd-b233-b03ad2e60a2f] Running
	I1122 00:28:23.547218  193714 system_pods.go:89] "kube-scheduler-pause-044220" [041fc1ef-3cd7-41a0-b7d3-c7215a087516] Running
	I1122 00:28:23.547227  193714 system_pods.go:126] duration metric: took 4.410206ms to wait for k8s-apps to be running ...
	I1122 00:28:23.547236  193714 system_svc.go:44] waiting for kubelet service to be running ....
	I1122 00:28:23.547283  193714 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:28:23.565366  193714 system_svc.go:56] duration metric: took 18.119197ms WaitForService to wait for kubelet
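
The kubelet service check above relies purely on systemctl's exit status: `systemctl is-active --quiet <unit>` exits 0 only when the unit is active. A minimal sketch of the same check:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Exit status 0 => active; anything else => inactive, failed, or unknown.
	if err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
		fmt.Println("kubelet not active:", err)
		return
	}
	fmt.Println("kubelet active")
}
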
	I1122 00:28:23.565454  193714 kubeadm.go:587] duration metric: took 8.208605629s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:28:23.565493  193714 node_conditions.go:102] verifying NodePressure condition ...
	I1122 00:28:23.568906  193714 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1122 00:28:23.569007  193714 node_conditions.go:123] node cpu capacity is 8
	I1122 00:28:23.569044  193714 node_conditions.go:105] duration metric: took 3.508228ms to run NodePressure ...
	I1122 00:28:23.569077  193714 start.go:242] waiting for startup goroutines ...
	I1122 00:28:23.569087  193714 start.go:247] waiting for cluster config update ...
	I1122 00:28:23.569102  193714 start.go:256] writing updated cluster config ...
	I1122 00:28:23.569414  193714 ssh_runner.go:195] Run: rm -f paused
	I1122 00:28:23.573674  193714 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:28:23.574284  193714 kapi.go:59] client config for pause-044220: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21934-9122/.minikube/profiles/pause-044220/client.crt", KeyFile:"/home/jenkins/minikube-integration/21934-9122/.minikube/profiles/pause-044220/client.key", CAFile:"/home/jenkins/minikube-integration/21934-9122/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2814ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
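
The rest.Config dump above corresponds to mutual-TLS client construction with client-go: the profile's client cert/key plus the minikube CA. A minimal sketch using the paths from the log, assuming k8s.io/client-go is available:

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// newClient builds a clientset the way the logged config does: host plus
// client certificate, client key, and cluster CA for mutual TLS.
func newClient() (*kubernetes.Clientset, error) {
	cfg := &rest.Config{
		Host: "https://192.168.76.2:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: ".minikube/profiles/pause-044220/client.crt",
			KeyFile:  ".minikube/profiles/pause-044220/client.key",
			CAFile:   ".minikube/ca.crt",
		},
	}
	return kubernetes.NewForConfig(cfg)
}

func main() {
	if _, err := newClient(); err != nil {
		panic(err)
	}
}
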
	I1122 00:28:23.577259  193714 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-c46n9" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:28:23.583580  193714 pod_ready.go:94] pod "coredns-66bc5c9577-c46n9" is "Ready"
	I1122 00:28:23.583604  193714 pod_ready.go:86] duration metric: took 6.322847ms for pod "coredns-66bc5c9577-c46n9" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:28:23.585886  193714 pod_ready.go:83] waiting for pod "etcd-pause-044220" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:28:23.591597  193714 pod_ready.go:94] pod "etcd-pause-044220" is "Ready"
	I1122 00:28:23.591622  193714 pod_ready.go:86] duration metric: took 5.714092ms for pod "etcd-pause-044220" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:28:23.593980  193714 pod_ready.go:83] waiting for pod "kube-apiserver-pause-044220" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:28:23.598902  193714 pod_ready.go:94] pod "kube-apiserver-pause-044220" is "Ready"
	I1122 00:28:23.598919  193714 pod_ready.go:86] duration metric: took 4.915662ms for pod "kube-apiserver-pause-044220" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:28:23.601147  193714 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-044220" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:28:23.977636  193714 pod_ready.go:94] pod "kube-controller-manager-pause-044220" is "Ready"
	I1122 00:28:23.977666  193714 pod_ready.go:86] duration metric: took 376.499583ms for pod "kube-controller-manager-pause-044220" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:28:24.178769  193714 pod_ready.go:83] waiting for pod "kube-proxy-lpz2b" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:28:24.577681  193714 pod_ready.go:94] pod "kube-proxy-lpz2b" is "Ready"
	I1122 00:28:24.577709  193714 pod_ready.go:86] duration metric: took 398.913529ms for pod "kube-proxy-lpz2b" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:28:24.777687  193714 pod_ready.go:83] waiting for pod "kube-scheduler-pause-044220" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:28:25.178008  193714 pod_ready.go:94] pod "kube-scheduler-pause-044220" is "Ready"
	I1122 00:28:25.178038  193714 pod_ready.go:86] duration metric: took 400.328668ms for pod "kube-scheduler-pause-044220" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:28:25.178075  193714 pod_ready.go:40] duration metric: took 1.604346914s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
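
The per-pod waits above boil down to polling the PodReady condition for pods matching each component label. A helper sketch in the same vein, meant to compose with a clientset built as in the previous sketch; the selector strings and namespace mirror the log:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// podIsReady reports whether a pod's PodReady condition is True.
func podIsReady(p corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitReady polls kube-system pods matching label until all report Ready.
func waitReady(cs *kubernetes.Clientset, label string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: label})
		if err != nil {
			return err
		}
		allReady := len(pods.Items) > 0
		for _, p := range pods.Items {
			if !podIsReady(p) {
				allReady = false
			}
		}
		if allReady {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for pods with label %q", label)
}

func main() {
	// Wiring elided: build cs as in the previous sketch, then e.g.
	// waitReady(cs, "component=etcd", 4*time.Minute)
}
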
	I1122 00:28:25.243596  193714 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1122 00:28:25.246167  193714 out.go:179] * Done! kubectl is now configured to use "pause-044220" cluster and "default" namespace by default
	I1122 00:28:23.412063  193202 out.go:204]   - Booting up control plane ...
	I1122 00:28:23.412234  193202 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1122 00:28:23.412335  193202 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1122 00:28:23.413209  193202 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1122 00:28:23.424254  193202 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1122 00:28:23.425108  193202 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1122 00:28:23.425210  193202 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1122 00:28:23.507941  193202 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1122 00:28:22.486979  185312 out.go:204]   - Booting up control plane ...
	I1122 00:28:22.487121  185312 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1122 00:28:22.487218  185312 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1122 00:28:22.488240  185312 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1122 00:28:22.497461  185312 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1122 00:28:22.498397  185312 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1122 00:28:22.498451  185312 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1122 00:28:22.573433  185312 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1122 00:28:22.901515  194936 out.go:252]   - Generating certificates and keys ...
	I1122 00:28:22.901605  194936 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1122 00:28:22.901709  194936 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1122 00:28:23.059824  194936 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1122 00:28:23.214515  194936 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1122 00:28:23.680992  194936 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1122 00:28:23.890747  194936 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1122 00:28:24.193141  194936 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1122 00:28:24.193335  194936 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [cert-expiration-624739 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1122 00:28:24.452194  194936 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1122 00:28:24.452462  194936 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [cert-expiration-624739 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1122 00:28:25.038420  194936 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1122 00:28:25.230401  194936 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1122 00:28:25.279825  194936 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1122 00:28:25.279911  194936 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1122 00:28:25.536418  194936 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1122 00:28:25.685008  194936 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1122 00:28:26.090469  194936 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1122 00:28:26.756010  194936 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1122 00:28:27.283336  194936 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1122 00:28:27.284330  194936 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1122 00:28:27.287624  194936 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1122 00:28:27.289689  194936 out.go:252]   - Booting up control plane ...
	I1122 00:28:27.289769  194936 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1122 00:28:27.289837  194936 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1122 00:28:27.289896  194936 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1122 00:28:27.302690  194936 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1122 00:28:27.302833  194936 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1122 00:28:27.309368  194936 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1122 00:28:27.309635  194936 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1122 00:28:27.309693  194936 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1122 00:28:27.404942  194936 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1122 00:28:27.405143  194936 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
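
The kubelet-check line above polls a different endpoint than the apiserver probe: kubelet serves a plain-HTTP healthz on localhost:10248, so no TLS setup is needed. Sketch:

package main

import (
	"fmt"
	"net/http"
)

func main() {
	resp, err := http.Get("http://127.0.0.1:10248/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status) // kubeadm polls this until it returns 200 OK
}
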
	I1122 00:28:27.575569  185312 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.002225 seconds
	I1122 00:28:27.575734  185312 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1122 00:28:27.586290  185312 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1122 00:28:28.109793  185312 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1122 00:28:28.110105  185312 kubeadm.go:322] [mark-control-plane] Marking the node stopped-upgrade-220412 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1122 00:28:28.622688  185312 kubeadm.go:322] [bootstrap-token] Using token: d3qt8d.811oabqzpdiiofta
	I1122 00:28:28.624420  185312 out.go:204]   - Configuring RBAC rules ...
	I1122 00:28:28.624592  185312 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1122 00:28:28.629810  185312 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1122 00:28:28.637546  185312 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1122 00:28:28.641818  185312 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1122 00:28:28.644725  185312 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1122 00:28:28.647820  185312 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1122 00:28:28.660255  185312 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1122 00:28:28.888249  185312 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1122 00:28:29.034982  185312 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1122 00:28:29.035943  185312 kubeadm.go:322] 
	I1122 00:28:29.036025  185312 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1122 00:28:29.036031  185312 kubeadm.go:322] 
	I1122 00:28:29.036177  185312 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1122 00:28:29.036194  185312 kubeadm.go:322] 
	I1122 00:28:29.036288  185312 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1122 00:28:29.036382  185312 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1122 00:28:29.036470  185312 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1122 00:28:29.036475  185312 kubeadm.go:322] 
	I1122 00:28:29.036539  185312 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1122 00:28:29.036544  185312 kubeadm.go:322] 
	I1122 00:28:29.036621  185312 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1122 00:28:29.036630  185312 kubeadm.go:322] 
	I1122 00:28:29.036733  185312 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1122 00:28:29.036844  185312 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1122 00:28:29.036934  185312 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1122 00:28:29.036938  185312 kubeadm.go:322] 
	I1122 00:28:29.037033  185312 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1122 00:28:29.037165  185312 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1122 00:28:29.037172  185312 kubeadm.go:322] 
	I1122 00:28:29.037341  185312 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token d3qt8d.811oabqzpdiiofta \
	I1122 00:28:29.037487  185312 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:b7f0723026bed3ab68bd6fad96097e10a75fbae3d2b8c0df51e0d691a79889b0 \
	I1122 00:28:29.037513  185312 kubeadm.go:322] 	--control-plane 
	I1122 00:28:29.037518  185312 kubeadm.go:322] 
	I1122 00:28:29.037621  185312 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1122 00:28:29.037625  185312 kubeadm.go:322] 
	I1122 00:28:29.037725  185312 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token d3qt8d.811oabqzpdiiofta \
	I1122 00:28:29.037895  185312 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:b7f0723026bed3ab68bd6fad96097e10a75fbae3d2b8c0df51e0d691a79889b0 
	I1122 00:28:29.043319  185312 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1122 00:28:29.043465  185312 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
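
A note on the join command printed above: the --discovery-token-ca-cert-hash value is SHA-256 over the DER-encoded Subject Public Key Info of the cluster CA, not over the whole certificate. A sketch deriving it; the CA path is the one provisioned earlier in this log:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block in CA file")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Re-encode just the public key as SPKI DER, then hash it; this is the
	// pin format kubeadm prints as sha256:<hex>.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}
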
	I1122 00:28:29.043499  185312 cni.go:84] Creating CNI manager for ""
	I1122 00:28:29.043508  185312 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1122 00:28:29.045242  185312 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1122 00:28:28.509929  193202 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.002049 seconds
	I1122 00:28:28.510105  193202 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1122 00:28:28.521855  193202 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1122 00:28:29.044683  193202 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1122 00:28:29.044923  193202 kubeadm.go:322] [mark-control-plane] Marking the node running-upgrade-670577 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1122 00:28:29.559140  193202 kubeadm.go:322] [bootstrap-token] Using token: rpkhfu.rw3uzklcsgreiivj
	I1122 00:28:29.560229  193202 out.go:204]   - Configuring RBAC rules ...
	I1122 00:28:29.560500  193202 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1122 00:28:29.567572  193202 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1122 00:28:29.578610  193202 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1122 00:28:29.582210  193202 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1122 00:28:29.587035  193202 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1122 00:28:29.591041  193202 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1122 00:28:29.603325  193202 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1122 00:28:29.857035  193202 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1122 00:28:29.972031  193202 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1122 00:28:29.973422  193202 kubeadm.go:322] 
	I1122 00:28:29.973510  193202 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1122 00:28:29.973516  193202 kubeadm.go:322] 
	I1122 00:28:29.973630  193202 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1122 00:28:29.973641  193202 kubeadm.go:322] 
	I1122 00:28:29.973669  193202 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1122 00:28:29.973730  193202 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1122 00:28:29.973778  193202 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1122 00:28:29.973782  193202 kubeadm.go:322] 
	I1122 00:28:29.973835  193202 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1122 00:28:29.973839  193202 kubeadm.go:322] 
	I1122 00:28:29.973887  193202 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1122 00:28:29.973891  193202 kubeadm.go:322] 
	I1122 00:28:29.973946  193202 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1122 00:28:29.974026  193202 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1122 00:28:29.974126  193202 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1122 00:28:29.974133  193202 kubeadm.go:322] 
	I1122 00:28:29.974227  193202 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1122 00:28:29.974307  193202 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1122 00:28:29.974311  193202 kubeadm.go:322] 
	I1122 00:28:29.974398  193202 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token rpkhfu.rw3uzklcsgreiivj \
	I1122 00:28:29.974551  193202 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:b7f0723026bed3ab68bd6fad96097e10a75fbae3d2b8c0df51e0d691a79889b0 \
	I1122 00:28:29.974577  193202 kubeadm.go:322] 	--control-plane 
	I1122 00:28:29.974581  193202 kubeadm.go:322] 
	I1122 00:28:29.974682  193202 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1122 00:28:29.974688  193202 kubeadm.go:322] 
	I1122 00:28:29.974773  193202 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token rpkhfu.rw3uzklcsgreiivj \
	I1122 00:28:29.974932  193202 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:b7f0723026bed3ab68bd6fad96097e10a75fbae3d2b8c0df51e0d691a79889b0 
	I1122 00:28:29.979525  193202 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1122 00:28:29.979676  193202 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1122 00:28:29.979700  193202 cni.go:84] Creating CNI manager for ""
	I1122 00:28:29.979708  193202 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1122 00:28:29.981772  193202 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1122 00:28:29.046487  185312 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1122 00:28:29.051638  185312 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1122 00:28:29.051660  185312 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1122 00:28:29.074734  185312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1122 00:28:30.011580  185312 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1122 00:28:30.011728  185312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:28:30.011820  185312 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=8220a6eb95f0a4d75f7f2d7b14cef975f050512d minikube.k8s.io/name=stopped-upgrade-220412 minikube.k8s.io/updated_at=2025_11_22T00_28_30_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:28:30.028775  185312 ops.go:34] apiserver oom_adj: -16
	I1122 00:28:30.150586  185312 kubeadm.go:1081] duration metric: took 138.909547ms to wait for elevateKubeSystemPrivileges.
	I1122 00:28:30.159589  185312 kubeadm.go:406] StartCluster complete in 9.761157459s
	I1122 00:28:30.159646  185312 settings.go:142] acquiring lock: {Name:mk85ab581b7684496f17a9f002a5be2718102560 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:28:30.159833  185312 settings.go:150] Updating kubeconfig:  /tmp/legacy_kubeconfig3356098238
	I1122 00:28:30.160606  185312 lock.go:35] WriteFile acquiring /tmp/legacy_kubeconfig3356098238: {Name:mkd8a8d3192853f0473f36a9c78137c0ac3a5a99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:28:30.160925  185312 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1122 00:28:30.161330  185312 config.go:182] Loaded profile config "stopped-upgrade-220412": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1122 00:28:30.161695  185312 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1122 00:28:30.161881  185312 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-220412"
	I1122 00:28:30.161911  185312 addons.go:231] Setting addon storage-provisioner=true in "stopped-upgrade-220412"
	I1122 00:28:30.162024  185312 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-220412"
	I1122 00:28:30.162041  185312 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-220412"
	I1122 00:28:30.162320  185312 host.go:66] Checking if "stopped-upgrade-220412" exists ...
	I1122 00:28:30.162946  185312 cli_runner.go:164] Run: docker container inspect stopped-upgrade-220412 --format={{.State.Status}}
	I1122 00:28:30.163750  185312 cli_runner.go:164] Run: docker container inspect stopped-upgrade-220412 --format={{.State.Status}}
	I1122 00:28:30.199242  185312 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1122 00:28:30.202682  185312 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:28:30.202696  185312 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1122 00:28:30.202760  185312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-220412
	I1122 00:28:30.202139  185312 addons.go:231] Setting addon default-storageclass=true in "stopped-upgrade-220412"
	I1122 00:28:30.202872  185312 host.go:66] Checking if "stopped-upgrade-220412" exists ...
	I1122 00:28:30.203488  185312 cli_runner.go:164] Run: docker container inspect stopped-upgrade-220412 --format={{.State.Status}}
	I1122 00:28:30.207223  185312 kapi.go:248] "coredns" deployment in "kube-system" namespace and "stopped-upgrade-220412" context rescaled to 1 replicas
	I1122 00:28:30.207276  185312 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1122 00:28:30.208497  185312 out.go:177] * Verifying Kubernetes components...
	I1122 00:28:29.983804  193202 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1122 00:28:29.989470  193202 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1122 00:28:29.989480  193202 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1122 00:28:30.019344  193202 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
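
The CNI step above is simply `kubectl apply` of the kindnet manifest that was scp'd to /var/tmp/minikube/cni.yaml, run with the version-pinned kubectl binary. A local sketch of the same invocation, with the SSH indirection elided:

package main

import (
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.28.3/kubectl",
		"apply", "--kubeconfig=/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
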
	
	
	==> CRI-O <==
	Nov 22 00:28:13 pause-044220 crio[2157]: time="2025-11-22T00:28:13.68015529Z" level=info msg="RDT not available in the host system"
	Nov 22 00:28:13 pause-044220 crio[2157]: time="2025-11-22T00:28:13.680171395Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 22 00:28:13 pause-044220 crio[2157]: time="2025-11-22T00:28:13.68131596Z" level=info msg="Conmon does support the --sync option"
	Nov 22 00:28:13 pause-044220 crio[2157]: time="2025-11-22T00:28:13.681343816Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 22 00:28:13 pause-044220 crio[2157]: time="2025-11-22T00:28:13.68136232Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 22 00:28:13 pause-044220 crio[2157]: time="2025-11-22T00:28:13.682208087Z" level=info msg="Conmon does support the --sync option"
	Nov 22 00:28:13 pause-044220 crio[2157]: time="2025-11-22T00:28:13.68222725Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 22 00:28:13 pause-044220 crio[2157]: time="2025-11-22T00:28:13.68778503Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 22 00:28:13 pause-044220 crio[2157]: time="2025-11-22T00:28:13.68781304Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 22 00:28:13 pause-044220 crio[2157]: time="2025-11-22T00:28:13.688704586Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Nov 22 00:28:13 pause-044220 crio[2157]: time="2025-11-22T00:28:13.689228958Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Nov 22 00:28:13 pause-044220 crio[2157]: time="2025-11-22T00:28:13.689300831Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Nov 22 00:28:13 pause-044220 crio[2157]: time="2025-11-22T00:28:13.798776299Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-c46n9 Namespace:kube-system ID:e4ed7db197e377e4e2e094506d0ab4c6c05a4c1118a4b22aa7919bc00d18d078 UID:4bf35b5e-3d40-4906-bab5-bb9d0c469a5a NetNS:/var/run/netns/222719f3-8c23-4f61-b149-1dbf8729b62c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00039c540}] Aliases:map[]}"
	Nov 22 00:28:13 pause-044220 crio[2157]: time="2025-11-22T00:28:13.799045315Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-c46n9 for CNI network kindnet (type=ptp)"
	Nov 22 00:28:13 pause-044220 crio[2157]: time="2025-11-22T00:28:13.799711217Z" level=info msg="Registered SIGHUP reload watcher"
	Nov 22 00:28:13 pause-044220 crio[2157]: time="2025-11-22T00:28:13.799736929Z" level=info msg="Starting seccomp notifier watcher"
	Nov 22 00:28:13 pause-044220 crio[2157]: time="2025-11-22T00:28:13.79979607Z" level=info msg="Create NRI interface"
	Nov 22 00:28:13 pause-044220 crio[2157]: time="2025-11-22T00:28:13.79990969Z" level=info msg="built-in NRI default validator is disabled"
	Nov 22 00:28:13 pause-044220 crio[2157]: time="2025-11-22T00:28:13.799925401Z" level=info msg="runtime interface created"
	Nov 22 00:28:13 pause-044220 crio[2157]: time="2025-11-22T00:28:13.79994001Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Nov 22 00:28:13 pause-044220 crio[2157]: time="2025-11-22T00:28:13.79994832Z" level=info msg="runtime interface starting up..."
	Nov 22 00:28:13 pause-044220 crio[2157]: time="2025-11-22T00:28:13.799955975Z" level=info msg="starting plugins..."
	Nov 22 00:28:13 pause-044220 crio[2157]: time="2025-11-22T00:28:13.79997092Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Nov 22 00:28:13 pause-044220 crio[2157]: time="2025-11-22T00:28:13.800384545Z" level=info msg="No systemd watchdog enabled"
	Nov 22 00:28:13 pause-044220 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	e462ee7150306       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   30 seconds ago      Running             coredns                   0                   e4ed7db197e37       coredns-66bc5c9577-c46n9               kube-system
	f4bcdbc163f62       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   42 seconds ago      Running             kindnet-cni               0                   a5001a3132b92       kindnet-6vbjb                          kube-system
	0e4ad5609f787       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   42 seconds ago      Running             kube-proxy                0                   f99e2251ff6a4       kube-proxy-lpz2b                       kube-system
	3e14061fd4fcc       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   53 seconds ago      Running             kube-scheduler            0                   d5209a6d6688f       kube-scheduler-pause-044220            kube-system
	ecabd39636370       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   53 seconds ago      Running             kube-controller-manager   0                   883b544295ec8       kube-controller-manager-pause-044220   kube-system
	79e36c09e9fdb       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   53 seconds ago      Running             kube-apiserver            0                   66879fb501230       kube-apiserver-pause-044220            kube-system
	b4370898f997f       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   53 seconds ago      Running             etcd                      0                   4e4fe2f00250b       etcd-pause-044220                      kube-system
	
	
	==> coredns [e462ee7150306df05ce83db1f3e0df58183280d43d04f1cb6d52feef9a3b7b3d] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49709 - 8295 "HINFO IN 7893672033695571944.9016515455700446387. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.093652726s
	
	
	==> describe nodes <==
	Name:               pause-044220
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-044220
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785
	                    minikube.k8s.io/name=pause-044220
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_22T00_27_43_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 22 Nov 2025 00:27:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-044220
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 22 Nov 2025 00:28:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 22 Nov 2025 00:28:23 +0000   Sat, 22 Nov 2025 00:27:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 22 Nov 2025 00:28:23 +0000   Sat, 22 Nov 2025 00:27:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 22 Nov 2025 00:28:23 +0000   Sat, 22 Nov 2025 00:27:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 22 Nov 2025 00:28:23 +0000   Sat, 22 Nov 2025 00:28:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-044220
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 5665009e93b91d39dc05718b691e3875
	  System UUID:                b5805957-782e-4cab-938a-26ad2cd52f0e
	  Boot ID:                    8e6acb0e-95c3-406d-a4d5-d86e16610b0f
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-c46n9                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     44s
	  kube-system                 etcd-pause-044220                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         49s
	  kube-system                 kindnet-6vbjb                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      44s
	  kube-system                 kube-apiserver-pause-044220             250m (3%)     0 (0%)      0 (0%)           0 (0%)         49s
	  kube-system                 kube-controller-manager-pause-044220    200m (2%)     0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                 kube-proxy-lpz2b                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	  kube-system                 kube-scheduler-pause-044220             100m (1%)     0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age               From             Message
	  ----    ------                   ----              ----             -------
	  Normal  Starting                 42s               kube-proxy       
	  Normal  Starting                 49s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  49s               kubelet          Node pause-044220 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    49s               kubelet          Node pause-044220 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     49s               kubelet          Node pause-044220 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           45s               node-controller  Node pause-044220 event: Registered Node pause-044220 in Controller
	  Normal  NodeNotReady             19s               kubelet          Node pause-044220 status is now: NodeNotReady
	  Normal  NodeReady                8s (x2 over 32s)  kubelet          Node pause-044220 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.082960] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023673] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.276450] kauditd_printk_skb: 47 callbacks suppressed
	[Nov21 23:48] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.062274] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.023896] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.023887] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.023879] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.023873] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +2.047773] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +4.031577] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[Nov21 23:49] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[ +16.383304] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[ +32.252695] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	
	
	==> etcd [b4370898f997f4499a567b711a03da7b4fec4f862abdda3f9d2f8bbdb7555955] <==
	{"level":"warn","ts":"2025-11-22T00:27:39.298095Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:27:39.307800Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:27:39.316476Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:27:39.376117Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36036","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-22T00:27:48.355299Z","caller":"traceutil/trace.go:172","msg":"trace[1742722197] linearizableReadLoop","detail":"{readStateIndex:361; appliedIndex:361; }","duration":"113.951812ms","start":"2025-11-22T00:27:48.241323Z","end":"2025-11-22T00:27:48.355275Z","steps":["trace[1742722197] 'read index received'  (duration: 113.939621ms)","trace[1742722197] 'applied index is now lower than readState.Index'  (duration: 11.201µs)"],"step_count":2}
	{"level":"info","ts":"2025-11-22T00:27:48.355416Z","caller":"traceutil/trace.go:172","msg":"trace[2122983945] transaction","detail":"{read_only:false; response_revision:350; number_of_response:1; }","duration":"128.817561ms","start":"2025-11-22T00:27:48.226583Z","end":"2025-11-22T00:27:48.355401Z","steps":["trace[2122983945] 'process raft request'  (duration: 128.711343ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-22T00:27:48.355580Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"114.211923ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/coredns\" limit:1 ","response":"range_response_count:1 size:693"}
	{"level":"info","ts":"2025-11-22T00:27:48.355632Z","caller":"traceutil/trace.go:172","msg":"trace[655372888] range","detail":"{range_begin:/registry/configmaps/kube-system/coredns; range_end:; response_count:1; response_revision:350; }","duration":"114.309768ms","start":"2025-11-22T00:27:48.241312Z","end":"2025-11-22T00:27:48.355622Z","steps":["trace[655372888] 'agreement among raft nodes before linearized reading'  (duration: 114.050454ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-22T00:27:48.516078Z","caller":"traceutil/trace.go:172","msg":"trace[1940888544] transaction","detail":"{read_only:false; response_revision:352; number_of_response:1; }","duration":"134.112516ms","start":"2025-11-22T00:27:48.381927Z","end":"2025-11-22T00:27:48.516040Z","steps":["trace[1940888544] 'process raft request'  (duration: 133.877542ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-22T00:27:48.516126Z","caller":"traceutil/trace.go:172","msg":"trace[971567281] transaction","detail":"{read_only:false; response_revision:351; number_of_response:1; }","duration":"135.71062ms","start":"2025-11-22T00:27:48.380396Z","end":"2025-11-22T00:27:48.516106Z","steps":["trace[971567281] 'process raft request'  (duration: 81.009186ms)","trace[971567281] 'compare'  (duration: 54.296647ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-22T00:27:48.880689Z","caller":"traceutil/trace.go:172","msg":"trace[1494015599] transaction","detail":"{read_only:false; response_revision:359; number_of_response:1; }","duration":"121.658683ms","start":"2025-11-22T00:27:48.759014Z","end":"2025-11-22T00:27:48.880673Z","steps":["trace[1494015599] 'process raft request'  (duration: 121.617537ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-22T00:27:48.880758Z","caller":"traceutil/trace.go:172","msg":"trace[1543492353] transaction","detail":"{read_only:false; response_revision:358; number_of_response:1; }","duration":"176.54715ms","start":"2025-11-22T00:27:48.704188Z","end":"2025-11-22T00:27:48.880735Z","steps":["trace[1543492353] 'process raft request'  (duration: 135.399665ms)","trace[1543492353] 'compare'  (duration: 40.924801ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-22T00:27:49.011605Z","caller":"traceutil/trace.go:172","msg":"trace[309240107] transaction","detail":"{read_only:false; response_revision:362; number_of_response:1; }","duration":"118.932143ms","start":"2025-11-22T00:27:48.892659Z","end":"2025-11-22T00:27:49.011592Z","steps":["trace[309240107] 'process raft request'  (duration: 118.842218ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-22T00:27:49.011603Z","caller":"traceutil/trace.go:172","msg":"trace[1081551917] transaction","detail":"{read_only:false; response_revision:361; number_of_response:1; }","duration":"122.622019ms","start":"2025-11-22T00:27:48.888914Z","end":"2025-11-22T00:27:49.011536Z","steps":["trace[1081551917] 'process raft request'  (duration: 116.11108ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-22T00:27:59.942992Z","caller":"traceutil/trace.go:172","msg":"trace[945982709] transaction","detail":"{read_only:false; response_revision:385; number_of_response:1; }","duration":"164.339708ms","start":"2025-11-22T00:27:59.778627Z","end":"2025-11-22T00:27:59.942967Z","steps":["trace[945982709] 'process raft request'  (duration: 87.726366ms)","trace[945982709] 'compare'  (duration: 76.451701ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-22T00:28:12.989855Z","caller":"traceutil/trace.go:172","msg":"trace[1571593150] transaction","detail":"{read_only:false; response_revision:405; number_of_response:1; }","duration":"141.359678ms","start":"2025-11-22T00:28:12.848480Z","end":"2025-11-22T00:28:12.989840Z","steps":["trace[1571593150] 'process raft request'  (duration: 141.281736ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-22T00:28:13.180803Z","caller":"traceutil/trace.go:172","msg":"trace[1566919365] linearizableReadLoop","detail":"{readStateIndex:424; appliedIndex:424; }","duration":"190.304294ms","start":"2025-11-22T00:28:12.990479Z","end":"2025-11-22T00:28:13.180783Z","steps":["trace[1566919365] 'read index received'  (duration: 190.294381ms)","trace[1566919365] 'applied index is now lower than readState.Index'  (duration: 8.748µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-22T00:28:13.180909Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"190.410163ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-22T00:28:13.180937Z","caller":"traceutil/trace.go:172","msg":"trace[1735423235] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:405; }","duration":"190.454874ms","start":"2025-11-22T00:28:12.990475Z","end":"2025-11-22T00:28:13.180930Z","steps":["trace[1735423235] 'agreement among raft nodes before linearized reading'  (duration: 190.379323ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-22T00:28:13.180934Z","caller":"traceutil/trace.go:172","msg":"trace[81478439] transaction","detail":"{read_only:false; response_revision:406; number_of_response:1; }","duration":"330.650227ms","start":"2025-11-22T00:28:12.850267Z","end":"2025-11-22T00:28:13.180917Z","steps":["trace[81478439] 'process raft request'  (duration: 330.541385ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-22T00:28:13.181480Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-22T00:28:12.850253Z","time spent":"330.728674ms","remote":"127.0.0.1:34400","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5547,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/minions/pause-044220\" mod_revision:385 > success:<request_put:<key:\"/registry/minions/pause-044220\" value_size:5509 >> failure:<request_range:<key:\"/registry/minions/pause-044220\" > >"}
	{"level":"info","ts":"2025-11-22T00:28:13.429453Z","caller":"traceutil/trace.go:172","msg":"trace[895717490] linearizableReadLoop","detail":"{readStateIndex:426; appliedIndex:426; }","duration":"200.772499ms","start":"2025-11-22T00:28:13.228661Z","end":"2025-11-22T00:28:13.429434Z","steps":["trace[895717490] 'read index received'  (duration: 200.765591ms)","trace[895717490] 'applied index is now lower than readState.Index'  (duration: 5.926µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-22T00:28:13.429551Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"200.895028ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-22T00:28:13.429575Z","caller":"traceutil/trace.go:172","msg":"trace[745545141] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:407; }","duration":"200.933162ms","start":"2025-11-22T00:28:13.228634Z","end":"2025-11-22T00:28:13.429567Z","steps":["trace[745545141] 'agreement among raft nodes before linearized reading'  (duration: 200.877572ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-22T00:28:13.429590Z","caller":"traceutil/trace.go:172","msg":"trace[270600498] transaction","detail":"{read_only:false; response_revision:408; number_of_response:1; }","duration":"242.272588ms","start":"2025-11-22T00:28:13.187309Z","end":"2025-11-22T00:28:13.429582Z","steps":["trace[270600498] 'process raft request'  (duration: 242.157757ms)"],"step_count":1}
	
	
	==> kernel <==
	 00:28:31 up  1:10,  0 user,  load average: 5.15, 2.17, 1.27
	Linux pause-044220 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f4bcdbc163f622f7ad75cecdf8d434f7eaf5d5abf5ce74eb74a0f51eaacea0e2] <==
	I1122 00:27:49.078662       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1122 00:27:49.079195       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1122 00:27:49.079399       1 main.go:148] setting mtu 1500 for CNI 
	I1122 00:27:49.079460       1 main.go:178] kindnetd IP family: "ipv4"
	I1122 00:27:49.079511       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-22T00:27:49Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1122 00:27:49.375570       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1122 00:27:49.375624       1 controller.go:381] "Waiting for informer caches to sync"
	I1122 00:27:49.375638       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1122 00:27:49.378588       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1122 00:27:49.675706       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1122 00:27:49.675741       1 metrics.go:72] Registering metrics
	I1122 00:27:49.675802       1 controller.go:711] "Syncing nftables rules"
	I1122 00:27:59.285950       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1122 00:27:59.286013       1 main.go:301] handling current node
	I1122 00:28:09.286330       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1122 00:28:09.286368       1 main.go:301] handling current node
	I1122 00:28:19.290139       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1122 00:28:19.290189       1 main.go:301] handling current node
	I1122 00:28:29.289244       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1122 00:28:29.289288       1 main.go:301] handling current node
	
	
	==> kube-apiserver [79e36c09e9fdbb8b8ebf65a808135710cb7702a0ba5485d102074e2baaf898e5] <==
	I1122 00:27:40.291301       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:27:40.291382       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1122 00:27:40.302973       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:27:40.303047       1 controller.go:667] quota admission added evaluator for: namespaces
	I1122 00:27:40.304493       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	E1122 00:27:40.314940       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	E1122 00:27:40.329104       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1122 00:27:40.519563       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1122 00:27:41.076185       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1122 00:27:41.080129       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1122 00:27:41.080158       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1122 00:27:41.580425       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1122 00:27:41.619989       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1122 00:27:41.693960       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1122 00:27:41.705223       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1122 00:27:41.707476       1 controller.go:667] quota admission added evaluator for: endpoints
	I1122 00:27:41.712234       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1122 00:27:41.740879       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1122 00:27:42.552257       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1122 00:27:42.560209       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1122 00:27:42.565774       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1122 00:27:46.741234       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1122 00:27:47.840447       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1122 00:27:47.891857       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:27:47.896510       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [ecabd3963637033e9f4270fbbeef757d2c6b777cf5058c6dcad3e6914ca9e45c] <==
	I1122 00:27:46.736826       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1122 00:27:46.736833       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1122 00:27:46.736840       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-044220"
	I1122 00:27:46.736888       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1122 00:27:46.738253       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1122 00:27:46.738336       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1122 00:27:46.738456       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1122 00:27:46.738464       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1122 00:27:46.738261       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1122 00:27:46.738493       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1122 00:27:46.738498       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1122 00:27:46.738638       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1122 00:27:46.738701       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1122 00:27:46.739663       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1122 00:27:46.739944       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1122 00:27:46.740106       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1122 00:27:46.743965       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1122 00:27:46.745421       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1122 00:27:46.746549       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1122 00:27:46.751686       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1122 00:27:46.751812       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1122 00:27:46.760168       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1122 00:28:01.739299       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I1122 00:28:16.740182       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1122 00:28:26.741561       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [0e4ad5609f7872bdcc433e845fd6360589ac30894e126e3f56e8d3cd82296232] <==
	I1122 00:27:48.732245       1 server_linux.go:53] "Using iptables proxy"
	I1122 00:27:48.787720       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1122 00:27:48.889160       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1122 00:27:48.889328       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1122 00:27:48.889494       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1122 00:27:48.909024       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1122 00:27:48.909115       1 server_linux.go:132] "Using iptables Proxier"
	I1122 00:27:48.914088       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1122 00:27:48.914436       1 server.go:527] "Version info" version="v1.34.1"
	I1122 00:27:48.914476       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:27:48.916433       1 config.go:200] "Starting service config controller"
	I1122 00:27:48.916454       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1122 00:27:48.916703       1 config.go:309] "Starting node config controller"
	I1122 00:27:48.916722       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1122 00:27:48.915873       1 config.go:403] "Starting serviceCIDR config controller"
	I1122 00:27:48.916904       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1122 00:27:48.917025       1 config.go:106] "Starting endpoint slice config controller"
	I1122 00:27:48.917035       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1122 00:27:49.017309       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1122 00:27:49.018249       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1122 00:27:49.018386       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1122 00:27:49.018400       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [3e14061fd4fccbedc7dfab2388bb9c60556e3392e101adffd2e959a71915c5ba] <==
	E1122 00:27:40.228235       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1122 00:27:40.228307       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1122 00:27:40.228346       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1122 00:27:40.228404       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1122 00:27:40.228446       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1122 00:27:40.228554       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1122 00:27:40.228749       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1122 00:27:40.228795       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1122 00:27:40.228837       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1122 00:27:40.228876       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1122 00:27:40.228926       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1122 00:27:40.228966       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1122 00:27:40.229033       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1122 00:27:41.121250       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1122 00:27:41.162318       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1122 00:27:41.177345       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1122 00:27:41.204540       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1122 00:27:41.234428       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1122 00:27:41.274182       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1122 00:27:41.280150       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1122 00:27:41.301096       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1122 00:27:41.334628       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1122 00:27:41.362982       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1122 00:27:41.408150       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1122 00:27:44.014925       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 22 00:28:12 pause-044220 kubelet[1302]: E1122 00:28:12.394670    1302 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 22 00:28:12 pause-044220 kubelet[1302]: E1122 00:28:12.394690    1302 kubelet_pods.go:1266] "Error listing containers" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 22 00:28:12 pause-044220 kubelet[1302]: E1122 00:28:12.394707    1302 kubelet.go:2613] "Failed cleaning pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 22 00:28:12 pause-044220 kubelet[1302]: E1122 00:28:12.410808    1302 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter=""
	Nov 22 00:28:12 pause-044220 kubelet[1302]: E1122 00:28:12.410856    1302 container_log_manager.go:154] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 22 00:28:12 pause-044220 kubelet[1302]: E1122 00:28:12.412953    1302 log.go:32] "Status from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 22 00:28:12 pause-044220 kubelet[1302]: E1122 00:28:12.412987    1302 kubelet.go:2996] "Container runtime sanity check failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 22 00:28:12 pause-044220 kubelet[1302]: E1122 00:28:12.435571    1302 log.go:32] "ImageFsInfo from image service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 22 00:28:12 pause-044220 kubelet[1302]: E1122 00:28:12.435609    1302 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: missing image stats: <nil>"
	Nov 22 00:28:12 pause-044220 kubelet[1302]: E1122 00:28:12.464936    1302 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Nov 22 00:28:12 pause-044220 kubelet[1302]: E1122 00:28:12.464968    1302 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 22 00:28:12 pause-044220 kubelet[1302]: E1122 00:28:12.464985    1302 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 22 00:28:12 pause-044220 kubelet[1302]: W1122 00:28:12.480382    1302 logging.go:55] [core] [Channel #4 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Nov 22 00:28:12 pause-044220 kubelet[1302]: W1122 00:28:12.623481    1302 logging.go:55] [core] [Channel #4 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Nov 22 00:28:12 pause-044220 kubelet[1302]: E1122 00:28:12.846529    1302 log.go:32] "Version from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 22 00:28:12 pause-044220 kubelet[1302]: I1122 00:28:12.846618    1302 setters.go:543] "Node became not ready" node="pause-044220" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T00:28:12Z","lastTransitionTime":"2025-11-22T00:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: plugin status uninitialized"}
	Nov 22 00:28:12 pause-044220 kubelet[1302]: W1122 00:28:12.910434    1302 logging.go:55] [core] [Channel #4 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Nov 22 00:28:13 pause-044220 kubelet[1302]: W1122 00:28:13.263817    1302 logging.go:55] [core] [Channel #4 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Nov 22 00:28:13 pause-044220 kubelet[1302]: E1122 00:28:13.465940    1302 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Nov 22 00:28:13 pause-044220 kubelet[1302]: E1122 00:28:13.465992    1302 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 22 00:28:13 pause-044220 kubelet[1302]: E1122 00:28:13.466011    1302 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 22 00:28:25 pause-044220 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 22 00:28:25 pause-044220 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 22 00:28:25 pause-044220 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 22 00:28:25 pause-044220 systemd[1]: kubelet.service: Consumed 1.408s CPU time.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-044220 -n pause-044220
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-044220 -n pause-044220: exit status 2 (359.974058ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-044220 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (7.01s)
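
Note: the kubelet log in the post-mortem above shows repeated "dial unix /var/run/crio/crio.sock: connect: no such file or directory" errors while crio.service was being restarted, which lines up with the pause failure. A minimal diagnostic sketch, assuming the pause-044220 profile is still running (commands are executed on the node via minikube ssh):

	# confirm CRI-O is active again and its socket is back
	out/minikube-linux-amd64 -p pause-044220 ssh -- sudo systemctl is-active crio
	out/minikube-linux-amd64 -p pause-044220 ssh -- ls -l /var/run/crio/crio.sock
	# list containers through the restarted runtime
	out/minikube-linux-amd64 -p pause-044220 ssh -- sudo crictl ps -a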

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.04s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-377321 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-377321 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (232.136575ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-22T00:31:14Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-377321 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
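
Note: before enabling an addon, minikube checks whether the cluster is paused by listing runc containers; the check quoted in the stderr above ("sudo runc list -f json") fails because runc's state directory /run/runc does not exist on the node. A sketch of reproducing the failing check by hand, assuming the old-k8s-version-377321 profile is still up:

	# the paused-check command minikube runs on the node
	out/minikube-linux-amd64 -p old-k8s-version-377321 ssh -- sudo runc list -f json
	# verify whether runc's default state directory exists
	out/minikube-linux-amd64 -p old-k8s-version-377321 ssh -- sudo ls -ld /run/runc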
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-377321 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-377321 describe deploy/metrics-server -n kube-system: exit status 1 (56.460587ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-377321 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
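
Note: the assertion expects the deployment image to contain "fake.domain/registry.k8s.io/echoserver:1.4" (per the --images/--registries overrides passed above), but the deployment was never created, so the describe output is empty. Once the addon does deploy, the effective image can be checked directly; a sketch:

	# image actually set on the metrics-server deployment
	kubectl --context old-k8s-version-377321 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'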
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-377321
helpers_test.go:243: (dbg) docker inspect old-k8s-version-377321:

-- stdout --
	[
	    {
	        "Id": "dffbefc5635f5e12082532a9ddb8dae95b35a43f9ae00cf681a38814017e28e1",
	        "Created": "2025-11-22T00:30:25.888209771Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 234449,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-22T00:30:25.938346712Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e5906b22e872a17998ae88aee6d850484e7a99144e0db6afcf2c44a53e6042d4",
	        "ResolvConfPath": "/var/lib/docker/containers/dffbefc5635f5e12082532a9ddb8dae95b35a43f9ae00cf681a38814017e28e1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dffbefc5635f5e12082532a9ddb8dae95b35a43f9ae00cf681a38814017e28e1/hostname",
	        "HostsPath": "/var/lib/docker/containers/dffbefc5635f5e12082532a9ddb8dae95b35a43f9ae00cf681a38814017e28e1/hosts",
	        "LogPath": "/var/lib/docker/containers/dffbefc5635f5e12082532a9ddb8dae95b35a43f9ae00cf681a38814017e28e1/dffbefc5635f5e12082532a9ddb8dae95b35a43f9ae00cf681a38814017e28e1-json.log",
	        "Name": "/old-k8s-version-377321",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-377321:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-377321",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dffbefc5635f5e12082532a9ddb8dae95b35a43f9ae00cf681a38814017e28e1",
	                "LowerDir": "/var/lib/docker/overlay2/6ac3be49161fcc219b6798f3ada9e8452efb3f889f36e7992c217a75468d65a5-init/diff:/var/lib/docker/overlay2/ba9391341a6f38c98c330a240b44a37e901d779a7c95e15c141c59c46e09c348/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6ac3be49161fcc219b6798f3ada9e8452efb3f889f36e7992c217a75468d65a5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6ac3be49161fcc219b6798f3ada9e8452efb3f889f36e7992c217a75468d65a5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6ac3be49161fcc219b6798f3ada9e8452efb3f889f36e7992c217a75468d65a5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-377321",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-377321/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-377321",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-377321",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-377321",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "1b8c1add6d294cd75715ee4b5d45128b781f66c6f608637c0a967943f375ab9e",
	            "SandboxKey": "/var/run/docker/netns/1b8c1add6d29",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33048"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33049"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33052"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33050"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33051"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-377321": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "476dc93872199ad7652e7290a0113d19cf28252d1edac64765d412bab275e357",
	                    "EndpointID": "97a0f9189b8daf5968974f77dbd92e0bdb4c00f1659e1b3c6c73942976d17746",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "5e:48:3a:67:46:02",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-377321",
	                        "dffbefc5635f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
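
The full docker inspect dump above can be narrowed to a single field with a Go template instead of scanning the whole JSON. A minimal sketch, reusing the container name from this run, that prints only the host port bound to the API server's 8443/tcp:

    # same template shape minikube itself uses against NetworkSettings.Ports
    docker container inspect old-k8s-version-377321 \
      --format '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'
    # for this run the output would be 33051
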
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-377321 -n old-k8s-version-377321
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-377321 logs -n 25
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-239758 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-239758          │ jenkins │ v1.37.0 │ 22 Nov 25 00:29 UTC │                     │
	│ ssh     │ -p cilium-239758 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-239758          │ jenkins │ v1.37.0 │ 22 Nov 25 00:29 UTC │                     │
	│ ssh     │ -p cilium-239758 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-239758          │ jenkins │ v1.37.0 │ 22 Nov 25 00:29 UTC │                     │
	│ ssh     │ -p cilium-239758 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-239758          │ jenkins │ v1.37.0 │ 22 Nov 25 00:29 UTC │                     │
	│ ssh     │ -p cilium-239758 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-239758          │ jenkins │ v1.37.0 │ 22 Nov 25 00:29 UTC │                     │
	│ ssh     │ -p cilium-239758 sudo containerd config dump                                                                                                                                                                                                  │ cilium-239758          │ jenkins │ v1.37.0 │ 22 Nov 25 00:29 UTC │                     │
	│ ssh     │ -p cilium-239758 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-239758          │ jenkins │ v1.37.0 │ 22 Nov 25 00:29 UTC │                     │
	│ ssh     │ -p cilium-239758 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-239758          │ jenkins │ v1.37.0 │ 22 Nov 25 00:29 UTC │                     │
	│ ssh     │ -p cilium-239758 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-239758          │ jenkins │ v1.37.0 │ 22 Nov 25 00:29 UTC │                     │
	│ ssh     │ -p cilium-239758 sudo crio config                                                                                                                                                                                                             │ cilium-239758          │ jenkins │ v1.37.0 │ 22 Nov 25 00:29 UTC │                     │
	│ delete  │ -p cilium-239758                                                                                                                                                                                                                              │ cilium-239758          │ jenkins │ v1.37.0 │ 22 Nov 25 00:29 UTC │ 22 Nov 25 00:29 UTC │
	│ start   │ -p cert-options-524062 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-524062    │ jenkins │ v1.37.0 │ 22 Nov 25 00:29 UTC │ 22 Nov 25 00:30 UTC │
	│ delete  │ -p NoKubernetes-953061                                                                                                                                                                                                                        │ NoKubernetes-953061    │ jenkins │ v1.37.0 │ 22 Nov 25 00:30 UTC │ 22 Nov 25 00:30 UTC │
	│ start   │ -p NoKubernetes-953061 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                         │ NoKubernetes-953061    │ jenkins │ v1.37.0 │ 22 Nov 25 00:30 UTC │ 22 Nov 25 00:30 UTC │
	│ ssh     │ -p NoKubernetes-953061 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-953061    │ jenkins │ v1.37.0 │ 22 Nov 25 00:30 UTC │                     │
	│ ssh     │ cert-options-524062 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-524062    │ jenkins │ v1.37.0 │ 22 Nov 25 00:30 UTC │ 22 Nov 25 00:30 UTC │
	│ ssh     │ -p cert-options-524062 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-524062    │ jenkins │ v1.37.0 │ 22 Nov 25 00:30 UTC │ 22 Nov 25 00:30 UTC │
	│ delete  │ -p cert-options-524062                                                                                                                                                                                                                        │ cert-options-524062    │ jenkins │ v1.37.0 │ 22 Nov 25 00:30 UTC │ 22 Nov 25 00:30 UTC │
	│ start   │ -p old-k8s-version-377321 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-377321 │ jenkins │ v1.37.0 │ 22 Nov 25 00:30 UTC │ 22 Nov 25 00:31 UTC │
	│ stop    │ -p NoKubernetes-953061                                                                                                                                                                                                                        │ NoKubernetes-953061    │ jenkins │ v1.37.0 │ 22 Nov 25 00:30 UTC │ 22 Nov 25 00:30 UTC │
	│ start   │ -p NoKubernetes-953061 --driver=docker  --container-runtime=crio                                                                                                                                                                              │ NoKubernetes-953061    │ jenkins │ v1.37.0 │ 22 Nov 25 00:30 UTC │ 22 Nov 25 00:30 UTC │
	│ ssh     │ -p NoKubernetes-953061 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-953061    │ jenkins │ v1.37.0 │ 22 Nov 25 00:30 UTC │                     │
	│ delete  │ -p NoKubernetes-953061                                                                                                                                                                                                                        │ NoKubernetes-953061    │ jenkins │ v1.37.0 │ 22 Nov 25 00:30 UTC │ 22 Nov 25 00:30 UTC │
	│ start   │ -p no-preload-983546 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-983546      │ jenkins │ v1.37.0 │ 22 Nov 25 00:30 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-377321 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-377321 │ jenkins │ v1.37.0 │ 22 Nov 25 00:31 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
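
Each row of the Audit table above is read back from the profile's audit log. A sketch of querying it directly, assuming the default audit.json location under this job's MINIKUBE_HOME (and that the file is still present):

    # list every audited invocation that touched the old-k8s-version profile
    grep '"old-k8s-version-377321"' \
      /home/jenkins/minikube-integration/21934-9122/.minikube/logs/audit.json
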
	
	
	==> Last Start <==
	Log file created at: 2025/11/22 00:30:35
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1122 00:30:35.332033  237844 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:30:35.332173  237844 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:30:35.332186  237844 out.go:374] Setting ErrFile to fd 2...
	I1122 00:30:35.332192  237844 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:30:35.332418  237844 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9122/.minikube/bin
	I1122 00:30:35.332855  237844 out.go:368] Setting JSON to false
	I1122 00:30:35.334015  237844 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4384,"bootTime":1763767051,"procs":309,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1122 00:30:35.334086  237844 start.go:143] virtualization: kvm guest
	I1122 00:30:35.335805  237844 out.go:179] * [no-preload-983546] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1122 00:30:35.336801  237844 out.go:179]   - MINIKUBE_LOCATION=21934
	I1122 00:30:35.336818  237844 notify.go:221] Checking for updates...
	I1122 00:30:35.339006  237844 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 00:30:35.340214  237844 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-9122/kubeconfig
	I1122 00:30:35.341221  237844 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-9122/.minikube
	I1122 00:30:35.342269  237844 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1122 00:30:35.343258  237844 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 00:30:35.344650  237844 config.go:182] Loaded profile config "cert-expiration-624739": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:30:35.344738  237844 config.go:182] Loaded profile config "kubernetes-upgrade-619859": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:30:35.344818  237844 config.go:182] Loaded profile config "old-k8s-version-377321": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1122 00:30:35.344883  237844 driver.go:422] Setting default libvirt URI to qemu:///system
	I1122 00:30:35.368801  237844 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1122 00:30:35.368874  237844 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:30:35.425993  237844 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-22 00:30:35.415676716 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1122 00:30:35.426171  237844 docker.go:319] overlay module found
	I1122 00:30:35.428131  237844 out.go:179] * Using the docker driver based on user configuration
	I1122 00:30:35.429039  237844 start.go:309] selected driver: docker
	I1122 00:30:35.429061  237844 start.go:930] validating driver "docker" against <nil>
	I1122 00:30:35.429090  237844 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 00:30:35.429643  237844 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:30:35.484783  237844 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-22 00:30:35.475269034 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1122 00:30:35.484975  237844 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1122 00:30:35.485213  237844 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:30:35.486728  237844 out.go:179] * Using Docker driver with root privileges
	I1122 00:30:35.487621  237844 cni.go:84] Creating CNI manager for ""
	I1122 00:30:35.487678  237844 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1122 00:30:35.487687  237844 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1122 00:30:35.487743  237844 start.go:353] cluster config:
	{Name:no-preload-983546 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-983546 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:30:35.488898  237844 out.go:179] * Starting "no-preload-983546" primary control-plane node in "no-preload-983546" cluster
	I1122 00:30:35.489979  237844 cache.go:134] Beginning downloading kic base image for docker with crio
	I1122 00:30:35.491068  237844 out.go:179] * Pulling base image v0.0.48-1763588073-21934 ...
	I1122 00:30:35.491906  237844 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:30:35.492011  237844 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon
	I1122 00:30:35.492029  237844 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/no-preload-983546/config.json ...
	I1122 00:30:35.492087  237844 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/no-preload-983546/config.json: {Name:mkd5a47659090e7535d04061dd975edc6d88044e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:30:35.492191  237844 cache.go:107] acquiring lock: {Name:mk4b1b351b6e05df924b1dea34823a5bae874e1f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:30:35.492257  237844 cache.go:107] acquiring lock: {Name:mk0912b033af5e0dc6737ad3b2b166867675943b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:30:35.492298  237844 cache.go:107] acquiring lock: {Name:mk2e1ee991a04da9a748a7199e1558e3e5412fee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:30:35.492299  237844 cache.go:107] acquiring lock: {Name:mk96320d9e02559e4fb5bcee79e63af23abf6b0e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:30:35.492300  237844 cache.go:107] acquiring lock: {Name:mk6d624ce3b8b502967383fd9c495ee3efa5f0d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:30:35.492284  237844 cache.go:115] /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1122 00:30:35.492356  237844 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 175.408µs
	I1122 00:30:35.492373  237844 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1122 00:30:35.492377  237844 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1122 00:30:35.492411  237844 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1122 00:30:35.492418  237844 cache.go:107] acquiring lock: {Name:mkeb32bd396caf88f92b976cb818c75db7b8b2e0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:30:35.492449  237844 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1122 00:30:35.492420  237844 cache.go:107] acquiring lock: {Name:mk12d63b3212c690b6dceb2e93efe384169c5870 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:30:35.492437  237844 cache.go:107] acquiring lock: {Name:mkcfead1c087753e04498b19f3a6339bfee4e556 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:30:35.492441  237844 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1122 00:30:35.492639  237844 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1122 00:30:35.492650  237844 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1122 00:30:35.492668  237844 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1122 00:30:35.493789  237844 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1122 00:30:35.493807  237844 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1122 00:30:35.493799  237844 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1122 00:30:35.493824  237844 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1122 00:30:35.493795  237844 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1122 00:30:35.493867  237844 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1122 00:30:35.493885  237844 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1122 00:30:35.513257  237844 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon, skipping pull
	I1122 00:30:35.513273  237844 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e exists in daemon, skipping load
	I1122 00:30:35.513286  237844 cache.go:243] Successfully downloaded all kic artifacts
	I1122 00:30:35.513305  237844 start.go:360] acquireMachinesLock for no-preload-983546: {Name:mk180ef84c85822552d32d9baa5d4747338a2875 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:30:35.513374  237844 start.go:364] duration metric: took 56.889µs to acquireMachinesLock for "no-preload-983546"
	I1122 00:30:35.513393  237844 start.go:93] Provisioning new machine with config: &{Name:no-preload-983546 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-983546 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1122 00:30:35.513447  237844 start.go:125] createHost starting for "" (driver="docker")
	I1122 00:30:36.678785  233551 kubeadm.go:319] [apiclient] All control plane components are healthy after 5.002399 seconds
	I1122 00:30:36.678930  233551 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1122 00:30:36.689878  233551 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1122 00:30:37.213459  233551 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1122 00:30:37.213850  233551 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-377321 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1122 00:30:37.722404  233551 kubeadm.go:319] [bootstrap-token] Using token: kz5sqc.g39kihgh304hzicv
	I1122 00:30:37.723809  233551 out.go:252]   - Configuring RBAC rules ...
	I1122 00:30:37.723994  233551 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1122 00:30:37.727929  233551 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1122 00:30:37.734166  233551 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1122 00:30:37.737694  233551 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1122 00:30:37.740200  233551 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1122 00:30:37.742984  233551 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1122 00:30:37.752985  233551 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1122 00:30:37.946920  233551 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1122 00:30:38.132221  233551 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1122 00:30:38.133322  233551 kubeadm.go:319] 
	I1122 00:30:38.133436  233551 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1122 00:30:38.133455  233551 kubeadm.go:319] 
	I1122 00:30:38.133548  233551 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1122 00:30:38.133556  233551 kubeadm.go:319] 
	I1122 00:30:38.133592  233551 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1122 00:30:38.133663  233551 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1122 00:30:38.133730  233551 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1122 00:30:38.133751  233551 kubeadm.go:319] 
	I1122 00:30:38.133824  233551 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1122 00:30:38.133831  233551 kubeadm.go:319] 
	I1122 00:30:38.133892  233551 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1122 00:30:38.133899  233551 kubeadm.go:319] 
	I1122 00:30:38.133966  233551 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1122 00:30:38.134079  233551 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1122 00:30:38.134213  233551 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1122 00:30:38.134230  233551 kubeadm.go:319] 
	I1122 00:30:38.134354  233551 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1122 00:30:38.134460  233551 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1122 00:30:38.134471  233551 kubeadm.go:319] 
	I1122 00:30:38.134597  233551 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token kz5sqc.g39kihgh304hzicv \
	I1122 00:30:38.134754  233551 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b7f0723026bed3ab68bd6fad96097e10a75fbae3d2b8c0df51e0d691a79889b0 \
	I1122 00:30:38.134788  233551 kubeadm.go:319] 	--control-plane 
	I1122 00:30:38.134798  233551 kubeadm.go:319] 
	I1122 00:30:38.134910  233551 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1122 00:30:38.134920  233551 kubeadm.go:319] 
	I1122 00:30:38.135036  233551 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token kz5sqc.g39kihgh304hzicv \
	I1122 00:30:38.135210  233551 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b7f0723026bed3ab68bd6fad96097e10a75fbae3d2b8c0df51e0d691a79889b0 
	I1122 00:30:38.137299  233551 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1122 00:30:38.137423  233551 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1122 00:30:38.137438  233551 cni.go:84] Creating CNI manager for ""
	I1122 00:30:38.137446  233551 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1122 00:30:38.138894  233551 out.go:179] * Configuring CNI (Container Networking Interface) ...
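
The banner above is kubeadm's stock success output for v1.28.0, and the commands it prints are the canonical ones. As a quick manual sanity check at this point, one could run the following on the control-plane node (a sketch, assuming root and that the CNI manifest has been applied):

    export KUBECONFIG=/etc/kubernetes/admin.conf   # admin kubeconfig written by kubeadm
    kubectl get nodes -o wide                      # node turns Ready once the CNI is up
    kubectl -n kube-system get pods                # control-plane pods and CoreDNS status
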
	I1122 00:30:34.854433  218533 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1122 00:30:34.854812  218533 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1122 00:30:35.355216  218533 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1122 00:30:35.355544  218533 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1122 00:30:35.855197  218533 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1122 00:30:35.855613  218533 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1122 00:30:36.355223  218533 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1122 00:30:36.355567  218533 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1122 00:30:36.855220  218533 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1122 00:30:36.855649  218533 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1122 00:30:37.354896  218533 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1122 00:30:37.355386  218533 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1122 00:30:37.855073  218533 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1122 00:30:37.855453  218533 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
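
The loop above is failing at the TCP level, not on the health check itself. The same probe can be reproduced by hand from the host (a sketch; -k skips certificate verification, which is fine for a liveness poke against the raw node IP):

    curl -k https://192.168.103.2:8443/healthz
    # "connection refused" while the apiserver is down; "ok" once it is serving
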
	I1122 00:30:38.355232  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:30:38.355316  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:30:38.386713  218533 cri.go:89] found id: "2f1f63f9f3b5445cfff0c09dd95bf3ea67a98b265db3f29974dbbbe1a316589b"
	I1122 00:30:38.386739  218533 cri.go:89] found id: ""
	I1122 00:30:38.386752  218533 logs.go:282] 1 containers: [2f1f63f9f3b5445cfff0c09dd95bf3ea67a98b265db3f29974dbbbe1a316589b]
	I1122 00:30:38.386808  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:30:38.390736  218533 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1122 00:30:38.390800  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:30:38.420931  218533 cri.go:89] found id: ""
	I1122 00:30:38.420957  218533 logs.go:282] 0 containers: []
	W1122 00:30:38.420967  218533 logs.go:284] No container was found matching "etcd"
	I1122 00:30:38.420974  218533 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1122 00:30:38.421027  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:30:38.447019  218533 cri.go:89] found id: ""
	I1122 00:30:38.447040  218533 logs.go:282] 0 containers: []
	W1122 00:30:38.447047  218533 logs.go:284] No container was found matching "coredns"
	I1122 00:30:38.447072  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:30:38.447129  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:30:38.475675  218533 cri.go:89] found id: "f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:30:38.475711  218533 cri.go:89] found id: ""
	I1122 00:30:38.475721  218533 logs.go:282] 1 containers: [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33]
	I1122 00:30:38.475769  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:30:38.479649  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:30:38.479712  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:30:38.507663  218533 cri.go:89] found id: ""
	I1122 00:30:38.507693  218533 logs.go:282] 0 containers: []
	W1122 00:30:38.507703  218533 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:30:38.507712  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:30:38.507773  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:30:38.534401  218533 cri.go:89] found id: "3f7cbd4579c805ae7dea11263df67bf0d295eba6ed9bed494de7a64e986301c4"
	I1122 00:30:38.534423  218533 cri.go:89] found id: ""
	I1122 00:30:38.534433  218533 logs.go:282] 1 containers: [3f7cbd4579c805ae7dea11263df67bf0d295eba6ed9bed494de7a64e986301c4]
	I1122 00:30:38.534488  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:30:38.540248  218533 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1122 00:30:38.540307  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:30:38.570978  218533 cri.go:89] found id: ""
	I1122 00:30:38.571006  218533 logs.go:282] 0 containers: []
	W1122 00:30:38.571018  218533 logs.go:284] No container was found matching "kindnet"
	I1122 00:30:38.571026  218533 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:30:38.571112  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:30:38.598246  218533 cri.go:89] found id: ""
	I1122 00:30:38.598273  218533 logs.go:282] 0 containers: []
	W1122 00:30:38.598284  218533 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:30:38.598296  218533 logs.go:123] Gathering logs for dmesg ...
	I1122 00:30:38.598330  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1122 00:30:38.612368  218533 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:30:38.612394  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1122 00:30:38.682039  218533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1122 00:30:38.682080  218533 logs.go:123] Gathering logs for kube-apiserver [2f1f63f9f3b5445cfff0c09dd95bf3ea67a98b265db3f29974dbbbe1a316589b] ...
	I1122 00:30:38.682098  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2f1f63f9f3b5445cfff0c09dd95bf3ea67a98b265db3f29974dbbbe1a316589b"
	I1122 00:30:38.718789  218533 logs.go:123] Gathering logs for kube-scheduler [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33] ...
	I1122 00:30:38.718823  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:30:38.779071  218533 logs.go:123] Gathering logs for kube-controller-manager [3f7cbd4579c805ae7dea11263df67bf0d295eba6ed9bed494de7a64e986301c4] ...
	I1122 00:30:38.779108  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3f7cbd4579c805ae7dea11263df67bf0d295eba6ed9bed494de7a64e986301c4"
	I1122 00:30:38.805874  218533 logs.go:123] Gathering logs for CRI-O ...
	I1122 00:30:38.805905  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1122 00:30:38.849416  218533 logs.go:123] Gathering logs for container status ...
	I1122 00:30:38.849455  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1122 00:30:38.891028  218533 logs.go:123] Gathering logs for kubelet ...
	I1122 00:30:38.891068  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
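
The log gathering above is all driven through crictl, and the same inspection can be done by hand on the node, e.g. for the kube-apiserver container found in this run (a sketch using the ID from the lines above):

    sudo crictl ps -a --name kube-apiserver   # list matching containers in any state
    sudo crictl logs --tail 400 2f1f63f9f3b5445cfff0c09dd95bf3ea67a98b265db3f29974dbbbe1a316589b
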
	I1122 00:30:35.515011  237844 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1122 00:30:35.515237  237844 start.go:159] libmachine.API.Create for "no-preload-983546" (driver="docker")
	I1122 00:30:35.515267  237844 client.go:173] LocalClient.Create starting
	I1122 00:30:35.515308  237844 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem
	I1122 00:30:35.515334  237844 main.go:143] libmachine: Decoding PEM data...
	I1122 00:30:35.515353  237844 main.go:143] libmachine: Parsing certificate...
	I1122 00:30:35.515396  237844 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem
	I1122 00:30:35.515414  237844 main.go:143] libmachine: Decoding PEM data...
	I1122 00:30:35.515425  237844 main.go:143] libmachine: Parsing certificate...
	I1122 00:30:35.515740  237844 cli_runner.go:164] Run: docker network inspect no-preload-983546 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1122 00:30:35.531884  237844 cli_runner.go:211] docker network inspect no-preload-983546 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1122 00:30:35.531933  237844 network_create.go:284] running [docker network inspect no-preload-983546] to gather additional debugging logs...
	I1122 00:30:35.531954  237844 cli_runner.go:164] Run: docker network inspect no-preload-983546
	W1122 00:30:35.551421  237844 cli_runner.go:211] docker network inspect no-preload-983546 returned with exit code 1
	I1122 00:30:35.551445  237844 network_create.go:287] error running [docker network inspect no-preload-983546]: docker network inspect no-preload-983546: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-983546 not found
	I1122 00:30:35.551456  237844 network_create.go:289] output of [docker network inspect no-preload-983546]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-983546 not found
	
	** /stderr **
	I1122 00:30:35.551515  237844 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:30:35.568157  237844 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-680fbf0b84de IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8e:e6:2f:e5:eb:9f} reservation:<nil>}
	I1122 00:30:35.568810  237844 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-fb90361b5ee3 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:6a:cd:d3:0b:da:39} reservation:<nil>}
	I1122 00:30:35.569419  237844 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8fa9d9e8b5ac IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:be:02:4b:3f:d0:70} reservation:<nil>}
	I1122 00:30:35.570117  237844 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e136d0}
	I1122 00:30:35.570138  237844 network_create.go:124] attempt to create docker network no-preload-983546 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1122 00:30:35.570185  237844 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-983546 no-preload-983546
	I1122 00:30:35.616962  237844 network_create.go:108] docker network no-preload-983546 192.168.76.0/24 created
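
The three "skipping subnet ... that is taken" lines above show the scan that settled on 192.168.76.0/24. A rough shell equivalent of that scan, assuming a POSIX shell with docker on PATH (the candidate list is illustrative, not minikube's actual code):

    # collect the subnets already claimed by existing docker networks
    used=$(docker network ls -q | xargs docker network inspect \
      --format '{{range .IPAM.Config}}{{.Subnet}} {{end}}' | tr '\n' ' ')
    for third in 49 58 67 76 85 94 103; do
      s="192.168.${third}.0/24"
      case " $used " in *" $s "*) continue ;; esac   # taken, try the next candidate
      echo "first free subnet: $s"
      break
    done
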
	I1122 00:30:35.616986  237844 kic.go:121] calculated static IP "192.168.76.2" for the "no-preload-983546" container
	I1122 00:30:35.617035  237844 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1122 00:30:35.635095  237844 cli_runner.go:164] Run: docker volume create no-preload-983546 --label name.minikube.sigs.k8s.io=no-preload-983546 --label created_by.minikube.sigs.k8s.io=true
	I1122 00:30:35.651832  237844 oci.go:103] Successfully created a docker volume no-preload-983546
	I1122 00:30:35.651898  237844 cli_runner.go:164] Run: docker run --rm --name no-preload-983546-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-983546 --entrypoint /usr/bin/test -v no-preload-983546:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -d /var/lib
	I1122 00:30:35.675655  237844 cache.go:162] opening:  /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1122 00:30:35.676774  237844 cache.go:162] opening:  /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1122 00:30:35.684740  237844 cache.go:162] opening:  /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1122 00:30:35.693896  237844 cache.go:162] opening:  /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1122 00:30:35.696109  237844 cache.go:162] opening:  /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1122 00:30:35.718767  237844 cache.go:162] opening:  /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1122 00:30:35.732041  237844 cache.go:162] opening:  /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1122 00:30:35.809596  237844 cache.go:157] /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1122 00:30:35.809625  237844 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 317.251669ms
	I1122 00:30:35.809641  237844 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1122 00:30:36.056978  237844 cache.go:157] /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1122 00:30:36.057022  237844 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 564.772108ms
	I1122 00:30:36.057045  237844 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
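
The cache entries saved above are written as per-image tar files under the images/amd64 tree. A sketch of checking one without loading it anywhere, using the pause image path from this run:

    # list the contents of the cached image tarball
    tar -tf /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 | head
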
	I1122 00:30:36.151323  237844 oci.go:107] Successfully prepared a docker volume no-preload-983546
	I1122 00:30:36.151368  237844 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	W1122 00:30:36.151454  237844 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1122 00:30:36.151492  237844 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1122 00:30:36.151539  237844 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1122 00:30:36.213942  237844 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-983546 --name no-preload-983546 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-983546 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-983546 --network no-preload-983546 --ip 192.168.76.2 --volume no-preload-983546:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e
	I1122 00:30:36.509726  237844 cli_runner.go:164] Run: docker container inspect no-preload-983546 --format={{.State.Running}}
	I1122 00:30:36.527978  237844 cli_runner.go:164] Run: docker container inspect no-preload-983546 --format={{.State.Status}}
	I1122 00:30:36.545217  237844 cli_runner.go:164] Run: docker exec no-preload-983546 stat /var/lib/dpkg/alternatives/iptables
	I1122 00:30:36.593328  237844 oci.go:144] the created container "no-preload-983546" has a running status.
	I1122 00:30:36.593357  237844 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21934-9122/.minikube/machines/no-preload-983546/id_rsa...
	I1122 00:30:36.705642  237844 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21934-9122/.minikube/machines/no-preload-983546/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1122 00:30:36.733207  237844 cli_runner.go:164] Run: docker container inspect no-preload-983546 --format={{.State.Status}}
	I1122 00:30:36.758955  237844 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1122 00:30:36.758997  237844 kic_runner.go:114] Args: [docker exec --privileged no-preload-983546 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1122 00:30:36.813415  237844 cli_runner.go:164] Run: docker container inspect no-preload-983546 --format={{.State.Status}}
	I1122 00:30:36.843457  237844 machine.go:94] provisionDockerMachine start ...
	I1122 00:30:36.843611  237844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983546
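Because the host ports were auto-assigned at creation, the provisioner recovers the SSH endpoint from container metadata; the Go template in the Run: line above, executed standalone, amounts to:

	docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	  no-preload-983546
	# -> 33058 in this run; the libmachine SSH client then dials 127.0.0.1:33058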
	I1122 00:30:36.865872  237844 main.go:143] libmachine: Using SSH client type: native
	I1122 00:30:36.866144  237844 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1122 00:30:36.866199  237844 main.go:143] libmachine: About to run SSH command:
	hostname
	I1122 00:30:37.008741  237844 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-983546
	
	I1122 00:30:37.008875  237844 ubuntu.go:182] provisioning hostname "no-preload-983546"
	I1122 00:30:37.009008  237844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983546
	I1122 00:30:37.034227  237844 main.go:143] libmachine: Using SSH client type: native
	I1122 00:30:37.034862  237844 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1122 00:30:37.034971  237844 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-983546 && echo "no-preload-983546" | sudo tee /etc/hostname
	I1122 00:30:37.172520  237844 cache.go:157] /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1122 00:30:37.172558  237844 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 1.680259387s
	I1122 00:30:37.172577  237844 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1122 00:30:37.181978  237844 cache.go:157] /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1122 00:30:37.182006  237844 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.68961071s
	I1122 00:30:37.182022  237844 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1122 00:30:37.185558  237844 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-983546
	
	I1122 00:30:37.185646  237844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983546
	I1122 00:30:37.209459  237844 main.go:143] libmachine: Using SSH client type: native
	I1122 00:30:37.209747  237844 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1122 00:30:37.209779  237844 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-983546' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-983546/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-983546' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1122 00:30:37.265938  237844 cache.go:157] /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1122 00:30:37.265975  237844 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 1.773684488s
	I1122 00:30:37.265993  237844 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1122 00:30:37.325459  237844 cache.go:157] /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1122 00:30:37.325490  237844 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 1.833132714s
	I1122 00:30:37.325503  237844 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1122 00:30:37.345678  237844 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1122 00:30:37.345713  237844 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21934-9122/.minikube CaCertPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21934-9122/.minikube}
	I1122 00:30:37.345759  237844 ubuntu.go:190] setting up certificates
	I1122 00:30:37.345780  237844 provision.go:84] configureAuth start
	I1122 00:30:37.345848  237844 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-983546
	I1122 00:30:37.367810  237844 provision.go:143] copyHostCerts
	I1122 00:30:37.367871  237844 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9122/.minikube/ca.pem, removing ...
	I1122 00:30:37.367884  237844 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9122/.minikube/ca.pem
	I1122 00:30:37.367967  237844 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21934-9122/.minikube/ca.pem (1078 bytes)
	I1122 00:30:37.368099  237844 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9122/.minikube/cert.pem, removing ...
	I1122 00:30:37.368130  237844 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9122/.minikube/cert.pem
	I1122 00:30:37.368181  237844 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21934-9122/.minikube/cert.pem (1123 bytes)
	I1122 00:30:37.368264  237844 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9122/.minikube/key.pem, removing ...
	I1122 00:30:37.368277  237844 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9122/.minikube/key.pem
	I1122 00:30:37.368314  237844 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21934-9122/.minikube/key.pem (1675 bytes)
	I1122 00:30:37.368383  237844 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21934-9122/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca-key.pem org=jenkins.no-preload-983546 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-983546]
	I1122 00:30:37.588123  237844 provision.go:177] copyRemoteCerts
	I1122 00:30:37.588192  237844 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1122 00:30:37.588227  237844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983546
	I1122 00:30:37.598706  237844 cache.go:157] /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1122 00:30:37.598742  237844 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 2.106470172s
	I1122 00:30:37.598756  237844 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1122 00:30:37.598777  237844 cache.go:87] Successfully saved all images to host disk.
	I1122 00:30:37.606131  237844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/no-preload-983546/id_rsa Username:docker}
	I1122 00:30:37.695388  237844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1122 00:30:37.713470  237844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1122 00:30:37.733042  237844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1122 00:30:37.752522  237844 provision.go:87] duration metric: took 406.728708ms to configureAuth
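configureAuth regenerates the machine's server certificate from the local minikube CA with the SANs listed at 00:30:37.368383. minikube does this in-process (Go's crypto/x509), not by shelling out; an openssl equivalent, shown here only as a sketch of the inputs involved, would be:

	openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
	  -out server.csr -subj "/O=jenkins.no-preload-983546"
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem \
	  -CAcreateserial -out server.pem -days 365 \
	  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.76.2,DNS:localhost,DNS:minikube,DNS:no-preload-983546')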
	I1122 00:30:37.752552  237844 ubuntu.go:206] setting minikube options for container-runtime
	I1122 00:30:37.752714  237844 config.go:182] Loaded profile config "no-preload-983546": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:30:37.752831  237844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983546
	I1122 00:30:37.772478  237844 main.go:143] libmachine: Using SSH client type: native
	I1122 00:30:37.772775  237844 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1122 00:30:37.772799  237844 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1122 00:30:38.055995  237844 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1122 00:30:38.056036  237844 machine.go:97] duration metric: took 1.212535661s to provisionDockerMachine
	I1122 00:30:38.056077  237844 client.go:176] duration metric: took 2.540778348s to LocalClient.Create
	I1122 00:30:38.056108  237844 start.go:167] duration metric: took 2.540868437s to libmachine.API.Create "no-preload-983546"
	I1122 00:30:38.056123  237844 start.go:293] postStartSetup for "no-preload-983546" (driver="docker")
	I1122 00:30:38.056137  237844 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1122 00:30:38.056216  237844 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1122 00:30:38.056266  237844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983546
	I1122 00:30:38.074006  237844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/no-preload-983546/id_rsa Username:docker}
	I1122 00:30:38.169092  237844 ssh_runner.go:195] Run: cat /etc/os-release
	I1122 00:30:38.172524  237844 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1122 00:30:38.172555  237844 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1122 00:30:38.172570  237844 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-9122/.minikube/addons for local assets ...
	I1122 00:30:38.172637  237844 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-9122/.minikube/files for local assets ...
	I1122 00:30:38.172734  237844 filesync.go:149] local asset: /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem -> 145852.pem in /etc/ssl/certs
	I1122 00:30:38.172860  237844 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1122 00:30:38.180377  237844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem --> /etc/ssl/certs/145852.pem (1708 bytes)
	I1122 00:30:38.200363  237844 start.go:296] duration metric: took 144.225932ms for postStartSetup
	I1122 00:30:38.200665  237844 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-983546
	I1122 00:30:38.220197  237844 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/no-preload-983546/config.json ...
	I1122 00:30:38.220485  237844 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:30:38.220539  237844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983546
	I1122 00:30:38.240446  237844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/no-preload-983546/id_rsa Username:docker}
	I1122 00:30:38.328220  237844 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 00:30:38.332660  237844 start.go:128] duration metric: took 2.819197821s to createHost
	I1122 00:30:38.332685  237844 start.go:83] releasing machines lock for "no-preload-983546", held for 2.819299987s
	I1122 00:30:38.332756  237844 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-983546
	I1122 00:30:38.351507  237844 ssh_runner.go:195] Run: cat /version.json
	I1122 00:30:38.351572  237844 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1122 00:30:38.351648  237844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983546
	I1122 00:30:38.351574  237844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983546
	I1122 00:30:38.371466  237844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/no-preload-983546/id_rsa Username:docker}
	I1122 00:30:38.371928  237844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/no-preload-983546/id_rsa Username:docker}
	I1122 00:30:38.547076  237844 ssh_runner.go:195] Run: systemctl --version
	I1122 00:30:38.559400  237844 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1122 00:30:38.600908  237844 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1122 00:30:38.605557  237844 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1122 00:30:38.605624  237844 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1122 00:30:38.632285  237844 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1122 00:30:38.632309  237844 start.go:496] detecting cgroup driver to use...
	I1122 00:30:38.632343  237844 detect.go:190] detected "systemd" cgroup driver on host os
	I1122 00:30:38.632396  237844 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1122 00:30:38.652201  237844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1122 00:30:38.664880  237844 docker.go:218] disabling cri-docker service (if available) ...
	I1122 00:30:38.664934  237844 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1122 00:30:38.685110  237844 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1122 00:30:38.703619  237844 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1122 00:30:38.811743  237844 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1122 00:30:38.924818  237844 docker.go:234] disabling docker service ...
	I1122 00:30:38.924887  237844 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1122 00:30:38.946475  237844 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1122 00:30:38.962106  237844 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1122 00:30:39.065564  237844 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1122 00:30:39.155734  237844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1122 00:30:39.167863  237844 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1122 00:30:39.181539  237844 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1122 00:30:39.181586  237844 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:30:39.191284  237844 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1122 00:30:39.191341  237844 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:30:39.199667  237844 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:30:39.208047  237844 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:30:39.216221  237844 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1122 00:30:39.223666  237844 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:30:39.231704  237844 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:30:39.244405  237844 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:30:39.252301  237844 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1122 00:30:39.259222  237844 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1122 00:30:39.266144  237844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:30:39.346309  237844 ssh_runner.go:195] Run: sudo systemctl restart crio
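Taken together, the sed pipeline from 00:30:39.181 to 00:30:39.244 leaves the drop-in at /etc/crio/crio.conf.d/02-crio.conf containing, in effect (a reconstruction from the commands above; any keys the stock file carries but the seds do not touch are omitted):

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]

Lowering ip_unprivileged_port_start to 0 is what lets pods bind ports below 1024 without extra capabilities; ip_forward is then forced on and crio restarted before the 60s socket wait begins.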
	I1122 00:30:39.809370  237844 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1122 00:30:39.809433  237844 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1122 00:30:39.813367  237844 start.go:564] Will wait 60s for crictl version
	I1122 00:30:39.813422  237844 ssh_runner.go:195] Run: which crictl
	I1122 00:30:39.816787  237844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1122 00:30:39.843525  237844 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1122 00:30:39.843614  237844 ssh_runner.go:195] Run: crio --version
	I1122 00:30:39.871285  237844 ssh_runner.go:195] Run: crio --version
	I1122 00:30:39.902893  237844 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1122 00:30:38.139983  233551 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1122 00:30:38.144022  233551 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1122 00:30:38.144040  233551 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1122 00:30:38.156560  233551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1122 00:30:38.897903  233551 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1122 00:30:38.898001  233551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:30:38.898015  233551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-377321 minikube.k8s.io/updated_at=2025_11_22T00_30_38_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785 minikube.k8s.io/name=old-k8s-version-377321 minikube.k8s.io/primary=true
	I1122 00:30:38.975639  233551 ops.go:34] apiserver oom_adj: -16
	I1122 00:30:38.975649  233551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:30:39.476599  233551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:30:39.975904  233551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:30:39.903996  237844 cli_runner.go:164] Run: docker network inspect no-preload-983546 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:30:39.924497  237844 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1122 00:30:39.928644  237844 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:30:39.938748  237844 kubeadm.go:884] updating cluster {Name:no-preload-983546 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-983546 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1122 00:30:39.938867  237844 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:30:39.938906  237844 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:30:39.962389  237844 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1122 00:30:39.962410  237844 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1122 00:30:39.962491  237844 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1122 00:30:39.962493  237844 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1122 00:30:39.962503  237844 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1122 00:30:39.962530  237844 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1122 00:30:39.962546  237844 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1122 00:30:39.962557  237844 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1122 00:30:39.962563  237844 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1122 00:30:39.962585  237844 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1122 00:30:39.963798  237844 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1122 00:30:39.963817  237844 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1122 00:30:39.963798  237844 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1122 00:30:39.963827  237844 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1122 00:30:39.963798  237844 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1122 00:30:39.963802  237844 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1122 00:30:39.963865  237844 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1122 00:30:39.963948  237844 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
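Every daemon lookup above misses, so each of the eight images takes the slow path that plays out over the next several seconds. Per image the sequence is, in outline (kube-scheduler shown; the scp here is a stand-in for minikube's ssh_runner file copy):

	sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1      # drop the missing/mismatched runtime copy
	scp ~/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 \
	    /var/lib/minikube/images/kube-scheduler_v1.34.1                        # copy the cached tar into the node
	sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1        # import into CRI-O's image store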
	I1122 00:30:40.120450  237844 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.1
	I1122 00:30:40.133631  237844 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.1
	I1122 00:30:40.134244  237844 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.1
	I1122 00:30:40.134419  237844 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1122 00:30:40.149487  237844 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	I1122 00:30:40.160103  237844 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97" in container runtime
	I1122 00:30:40.160147  237844 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1122 00:30:40.160193  237844 ssh_runner.go:195] Run: which crictl
	I1122 00:30:40.175544  237844 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813" in container runtime
	I1122 00:30:40.175584  237844 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1122 00:30:40.175630  237844 ssh_runner.go:195] Run: which crictl
	I1122 00:30:40.176107  237844 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1122 00:30:40.226286  237844 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
	I1122 00:30:40.226658  237844 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969" in container runtime
	I1122 00:30:40.226694  237844 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f" in container runtime
	I1122 00:30:40.226701  237844 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1122 00:30:40.226726  237844 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1122 00:30:40.226746  237844 ssh_runner.go:195] Run: which crictl
	I1122 00:30:40.226774  237844 ssh_runner.go:195] Run: which crictl
	I1122 00:30:40.260662  237844 cache_images.go:118] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115" in container runtime
	I1122 00:30:40.260687  237844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1122 00:30:40.260694  237844 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7" in container runtime
	I1122 00:30:40.260705  237844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1122 00:30:40.260706  237844 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1122 00:30:40.260674  237844 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1122 00:30:40.260758  237844 ssh_runner.go:195] Run: which crictl
	I1122 00:30:40.260758  237844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1122 00:30:40.260778  237844 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1122 00:30:40.260823  237844 ssh_runner.go:195] Run: which crictl
	I1122 00:30:40.260724  237844 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1122 00:30:40.260872  237844 ssh_runner.go:195] Run: which crictl
	I1122 00:30:40.260726  237844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1122 00:30:40.291597  237844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1122 00:30:40.291651  237844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1122 00:30:40.291602  237844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1122 00:30:40.294264  237844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1122 00:30:40.294314  237844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1122 00:30:40.294318  237844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1122 00:30:40.294271  237844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1122 00:30:40.331591  237844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1122 00:30:40.331591  237844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1122 00:30:41.462203  218533 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1122 00:30:40.475870  233551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:30:40.976138  233551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:30:41.475971  233551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:30:41.976016  233551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:30:42.476026  233551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:30:42.976197  233551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:30:43.476399  233551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:30:43.976299  233551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:30:44.476654  233551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:30:44.975779  233551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:30:40.334252  237844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1122 00:30:40.334281  237844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1122 00:30:40.334387  237844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1122 00:30:40.334572  237844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1122 00:30:40.334656  237844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1122 00:30:40.373607  237844 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1122 00:30:40.373715  237844 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1122 00:30:40.373989  237844 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1122 00:30:40.374104  237844 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1122 00:30:40.374106  237844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1122 00:30:40.374286  237844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1122 00:30:40.375840  237844 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1122 00:30:40.375913  237844 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1122 00:30:40.375930  237844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1122 00:30:40.381302  237844 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1122 00:30:40.381380  237844 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1122 00:30:40.381722  237844 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1122 00:30:40.381751  237844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (17396736 bytes)
	I1122 00:30:40.426079  237844 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1122 00:30:40.426101  237844 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1122 00:30:40.426118  237844 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1122 00:30:40.426143  237844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (27073024 bytes)
	I1122 00:30:40.426170  237844 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1122 00:30:40.426181  237844 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1122 00:30:40.426185  237844 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1122 00:30:40.426219  237844 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1122 00:30:40.426239  237844 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1122 00:30:40.426238  237844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (22831104 bytes)
	I1122 00:30:40.426260  237844 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1122 00:30:40.426273  237844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (22394368 bytes)
	I1122 00:30:40.446531  237844 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1122 00:30:40.477604  237844 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1122 00:30:40.477633  237844 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1122 00:30:40.477639  237844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (25966080 bytes)
	I1122 00:30:40.477658  237844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (74320896 bytes)
	I1122 00:30:40.477609  237844 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1122 00:30:40.477721  237844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	W1122 00:30:40.512336  237844 ssh_runner.go:129] session error, resetting client: ssh: rejected: connect failed (open failed)
	I1122 00:30:40.512503  237844 retry.go:31] will retry after 369.291412ms: ssh: rejected: connect failed (open failed)
	I1122 00:30:40.562348  237844 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1122 00:30:40.562407  237844 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1122 00:30:40.562466  237844 ssh_runner.go:195] Run: which crictl
	I1122 00:30:40.562547  237844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983546
	I1122 00:30:40.584663  237844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/no-preload-983546/id_rsa Username:docker}
	I1122 00:30:40.682104  237844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1122 00:30:40.695623  237844 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1122 00:30:40.695683  237844 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1122 00:30:40.716297  237844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1122 00:30:42.268955  237844 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.573249845s)
	I1122 00:30:42.268981  237844 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1122 00:30:42.269011  237844 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1122 00:30:42.269018  237844 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.552689803s)
	I1122 00:30:42.269094  237844 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1122 00:30:42.269199  237844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1122 00:30:43.388967  237844 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.119847055s)
	I1122 00:30:43.388994  237844 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1122 00:30:43.389012  237844 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1122 00:30:43.389019  237844 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.11979333s)
	I1122 00:30:43.389081  237844 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1122 00:30:43.389110  237844 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1122 00:30:43.389169  237844 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1122 00:30:43.393073  237844 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1122 00:30:43.393103  237844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1122 00:30:44.621244  237844 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (1.232106803s)
	I1122 00:30:44.621281  237844 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1122 00:30:44.621318  237844 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1122 00:30:44.621381  237844 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1122 00:30:46.463225  218533 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1122 00:30:46.463294  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:30:46.463371  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:30:46.494742  218533 cri.go:89] found id: "ef4059f9999a8801835467bde827fddc80fdda8dc03f709ad65bce83ac833d0c"
	I1122 00:30:46.494765  218533 cri.go:89] found id: "2f1f63f9f3b5445cfff0c09dd95bf3ea67a98b265db3f29974dbbbe1a316589b"
	I1122 00:30:46.494770  218533 cri.go:89] found id: ""
	I1122 00:30:46.494785  218533 logs.go:282] 2 containers: [ef4059f9999a8801835467bde827fddc80fdda8dc03f709ad65bce83ac833d0c 2f1f63f9f3b5445cfff0c09dd95bf3ea67a98b265db3f29974dbbbe1a316589b]
	I1122 00:30:46.494843  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:30:46.499135  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:30:46.502950  218533 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1122 00:30:46.503010  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:30:46.530954  218533 cri.go:89] found id: ""
	I1122 00:30:46.530983  218533 logs.go:282] 0 containers: []
	W1122 00:30:46.530993  218533 logs.go:284] No container was found matching "etcd"
	I1122 00:30:46.531000  218533 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1122 00:30:46.531098  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:30:46.560611  218533 cri.go:89] found id: ""
	I1122 00:30:46.560682  218533 logs.go:282] 0 containers: []
	W1122 00:30:46.560697  218533 logs.go:284] No container was found matching "coredns"
	I1122 00:30:46.560706  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:30:46.560760  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:30:46.589771  218533 cri.go:89] found id: "f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:30:46.589793  218533 cri.go:89] found id: ""
	I1122 00:30:46.589803  218533 logs.go:282] 1 containers: [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33]
	I1122 00:30:46.589861  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:30:46.593711  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:30:46.593775  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:30:46.621650  218533 cri.go:89] found id: ""
	I1122 00:30:46.621673  218533 logs.go:282] 0 containers: []
	W1122 00:30:46.621683  218533 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:30:46.621690  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:30:46.621752  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:30:46.648922  218533 cri.go:89] found id: "3f7cbd4579c805ae7dea11263df67bf0d295eba6ed9bed494de7a64e986301c4"
	I1122 00:30:46.648944  218533 cri.go:89] found id: ""
	I1122 00:30:46.648971  218533 logs.go:282] 1 containers: [3f7cbd4579c805ae7dea11263df67bf0d295eba6ed9bed494de7a64e986301c4]
	I1122 00:30:46.649024  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:30:46.652873  218533 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1122 00:30:46.652931  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:30:46.679202  218533 cri.go:89] found id: ""
	I1122 00:30:46.679228  218533 logs.go:282] 0 containers: []
	W1122 00:30:46.679239  218533 logs.go:284] No container was found matching "kindnet"
	I1122 00:30:46.679250  218533 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:30:46.679306  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:30:46.705386  218533 cri.go:89] found id: ""
	I1122 00:30:46.705407  218533 logs.go:282] 0 containers: []
	W1122 00:30:46.705417  218533 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:30:46.705431  218533 logs.go:123] Gathering logs for kubelet ...
	I1122 00:30:46.705444  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:30:46.773951  218533 logs.go:123] Gathering logs for dmesg ...
	I1122 00:30:46.773977  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1122 00:30:46.788089  218533 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:30:46.788114  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
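Process 218533's log sweep above checks one component at a time; the discovery step reduces to a loop like this (component list taken from the queries above):

	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet storage-provisioner; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  # empty output is what produces the 'No container was found matching' warnings
	done

Only kube-apiserver, kube-scheduler and kube-controller-manager return container IDs here, consistent with a control plane that is still coming up when the healthz probe timed out.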
	I1122 00:30:45.475793  233551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:30:45.976271  233551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:30:46.476401  233551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:30:46.976351  233551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:30:47.475851  233551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:30:47.976517  233551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:30:48.476519  233551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:30:48.976423  233551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:30:49.476417  233551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:30:49.976108  233551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:30:45.672155  237844 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.050745752s)
	I1122 00:30:45.672191  237844 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1122 00:30:45.672217  237844 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1122 00:30:45.672267  237844 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1122 00:30:47.359570  237844 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.687274361s)
	I1122 00:30:47.359603  237844 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1122 00:30:47.359634  237844 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1122 00:30:47.359679  237844 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1122 00:30:47.463551  237844 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1122 00:30:47.463593  237844 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1122 00:30:47.463647  237844 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	I1122 00:30:50.476206  233551 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:30:50.859224  233551 kubeadm.go:1114] duration metric: took 11.961282886s to wait for elevateKubeSystemPrivileges
	I1122 00:30:50.859515  233551 kubeadm.go:403] duration metric: took 21.158485684s to StartCluster
	I1122 00:30:50.859607  233551 settings.go:142] acquiring lock: {Name:mk281bec5fc7c41e6f3fe8d6a1502b13a2db8fe5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:30:50.859847  233551 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21934-9122/kubeconfig
	I1122 00:30:50.862066  233551 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/kubeconfig: {Name:mk85f0b9d89ff824568ecbebf0d1111042a6bb93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:30:50.862368  233551 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1122 00:30:50.862515  233551 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1122 00:30:50.862608  233551 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1122 00:30:50.862689  233551 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-377321"
	I1122 00:30:50.862708  233551 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-377321"
	I1122 00:30:50.862721  233551 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-377321"
	I1122 00:30:50.862740  233551 host.go:66] Checking if "old-k8s-version-377321" exists ...
	I1122 00:30:50.862752  233551 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-377321"
	I1122 00:30:50.862777  233551 config.go:182] Loaded profile config "old-k8s-version-377321": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1122 00:30:50.863184  233551 cli_runner.go:164] Run: docker container inspect old-k8s-version-377321 --format={{.State.Status}}
	I1122 00:30:50.863564  233551 cli_runner.go:164] Run: docker container inspect old-k8s-version-377321 --format={{.State.Status}}
	I1122 00:30:50.864224  233551 out.go:179] * Verifying Kubernetes components...
	I1122 00:30:50.865549  233551 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:30:50.896895  233551 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1122 00:30:50.898681  233551 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:30:50.898761  233551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1122 00:30:50.898891  233551 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-377321
	I1122 00:30:50.901711  233551 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-377321"
	I1122 00:30:50.901960  233551 host.go:66] Checking if "old-k8s-version-377321" exists ...
	I1122 00:30:50.902479  233551 cli_runner.go:164] Run: docker container inspect old-k8s-version-377321 --format={{.State.Status}}
	I1122 00:30:50.934118  233551 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1122 00:30:50.934145  233551 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1122 00:30:50.934217  233551 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-377321
	I1122 00:30:50.935243  233551 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/old-k8s-version-377321/id_rsa Username:docker}
	I1122 00:30:50.965229  233551 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/old-k8s-version-377321/id_rsa Username:docker}
	I1122 00:30:51.009143  233551 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1122 00:30:51.050328  233551 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:30:51.061993  233551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:30:51.079270  233551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1122 00:30:51.273115  233551 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1122 00:30:51.274700  233551 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-377321" to be "Ready" ...
	I1122 00:30:51.542790  233551 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
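The lines above show how minikube injects the host.minikube.internal record: it fetches the coredns ConfigMap, splices a "hosts { ... fallthrough }" stanza in front of the "forward . /etc/resolv.conf" directive with sed, and pipes the result back through kubectl replace. A minimal way to confirm the injected stanza, reusing the binary and kubeconfig paths from the log:

    sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A3 'hosts {'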
	I1122 00:30:51.063504  237844 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (3.599831272s)
	I1122 00:30:51.063538  237844 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1122 00:30:51.063559  237844 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1122 00:30:51.063599  237844 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1122 00:30:51.743341  237844 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1122 00:30:51.743395  237844 cache_images.go:125] Successfully loaded all cached images
	I1122 00:30:51.743403  237844 cache_images.go:94] duration metric: took 11.780973768s to LoadCachedImages
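Note: podman and CRI-O share the same containers/storage backend, which is why loading a tarball with "sudo podman load" makes the image visible to the CRI as well. A quick manual check after such a load, sketched under the assumption that you have a shell on the node (e.g. via minikube ssh):

    # images podman just loaded into the shared store
    sudo podman images | grep -E 'kube-|etcd|pause|storage-provisioner'
    # the same store, seen through the CRI
    sudo crictl images | grep -E 'registry.k8s.io|k8s-minikube'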
	I1122 00:30:51.743419  237844 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1122 00:30:51.743511  237844 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-983546 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-983546 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
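The kubelet snippet above is a systemd drop-in: the bare "ExecStart=" line clears any ExecStart inherited from the base unit before the next line redefines it, the standard systemd idiom for overriding a command in a drop-in (the snippet is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below). A sketch of verifying the merged result on the node:

    # base unit plus every drop-in systemd will merge, in order
    sudo systemctl cat kubelet
    # confirms only the drop-in's ExecStart survived the reset
    sudo systemctl show kubelet -p ExecStart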
	I1122 00:30:51.743572  237844 ssh_runner.go:195] Run: crio config
	I1122 00:30:51.795479  237844 cni.go:84] Creating CNI manager for ""
	I1122 00:30:51.795497  237844 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1122 00:30:51.795511  237844 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1122 00:30:51.795531  237844 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-983546 NodeName:no-preload-983546 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1122 00:30:51.795651  237844 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-983546"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
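The block above is the multi-document kubeadm config minikube renders: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration in one file. Recent kubeadm releases can lint such a file before init; a sketch using the paths from this run (the "config validate" subcommand exists in kubeadm v1.26+, but confirm against your version):

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml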
	
	I1122 00:30:51.795708  237844 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1122 00:30:51.804448  237844 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1122 00:30:51.804508  237844 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1122 00:30:51.812342  237844 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
	I1122 00:30:51.812360  237844 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21934-9122/.minikube/cache/linux/amd64/v1.34.1/kubeadm
	I1122 00:30:51.812420  237844 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1122 00:30:51.812421  237844 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/21934-9122/.minikube/cache/linux/amd64/v1.34.1/kubelet
	I1122 00:30:51.816106  237844 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1122 00:30:51.816130  237844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/cache/linux/amd64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (60559544 bytes)
	I1122 00:30:52.747447  237844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:30:52.767849  237844 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1122 00:30:52.772871  237844 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1122 00:30:52.772911  237844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/cache/linux/amd64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (59195684 bytes)
	I1122 00:30:53.002288  237844 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1122 00:30:53.007391  237844 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1122 00:30:53.007424  237844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/cache/linux/amd64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (74027192 bytes)
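Each binary above is fetched from dl.k8s.io with a "?checksum=file:...sha256" query string, which tells minikube's downloader to verify the artifact against the published SHA-256 file. The equivalent manual download-and-verify, with URLs taken straight from the log:

    curl -fLO https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm
    curl -fL https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256 -o kubeadm.sha256
    echo "$(cat kubeadm.sha256)  kubeadm" | sha256sum --check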
	I1122 00:30:53.221389  237844 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1122 00:30:53.231193  237844 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1122 00:30:53.247500  237844 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1122 00:30:53.264625  237844 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1122 00:30:53.277780  237844 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1122 00:30:53.281718  237844 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:30:53.293643  237844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:30:53.377863  237844 ssh_runner.go:195] Run: sudo systemctl start kubelet
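The /etc/hosts edit above is deliberately idempotent: grep first checks whether the control-plane record already exists, and the rewrite filters out any stale line for the name before appending a fresh one through a temp file, so repeated runs never accumulate duplicates. The same pattern in isolation, with the host and IP from this run:

    if ! grep -q $'192.168.76.2\tcontrol-plane.minikube.internal' /etc/hosts; then
      { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
        printf '192.168.76.2\tcontrol-plane.minikube.internal\n'; } > /tmp/h.$$
      sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$
    fi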
	I1122 00:30:53.407312  237844 certs.go:69] Setting up /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/no-preload-983546 for IP: 192.168.76.2
	I1122 00:30:53.407335  237844 certs.go:195] generating shared ca certs ...
	I1122 00:30:53.407357  237844 certs.go:227] acquiring lock for ca certs: {Name:mk04e97b22fe69e444baefaaf0d53a0afe979470 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:30:53.407586  237844 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21934-9122/.minikube/ca.key
	I1122 00:30:53.407658  237844 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21934-9122/.minikube/proxy-client-ca.key
	I1122 00:30:53.407674  237844 certs.go:257] generating profile certs ...
	I1122 00:30:53.407745  237844 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/no-preload-983546/client.key
	I1122 00:30:53.407763  237844 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/no-preload-983546/client.crt with IP's: []
	I1122 00:30:53.473727  237844 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/no-preload-983546/client.crt ...
	I1122 00:30:53.473751  237844 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/no-preload-983546/client.crt: {Name:mk0a7e254855978c62aad90a302eed0f40a29f00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:30:53.473894  237844 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/no-preload-983546/client.key ...
	I1122 00:30:53.473906  237844 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/no-preload-983546/client.key: {Name:mkbf5ac41d1d9c139518c7ffae7d5f2394c696f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:30:53.473988  237844 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/no-preload-983546/apiserver.key.c827695f
	I1122 00:30:53.474002  237844 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/no-preload-983546/apiserver.crt.c827695f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1122 00:30:53.597966  237844 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/no-preload-983546/apiserver.crt.c827695f ...
	I1122 00:30:53.597990  237844 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/no-preload-983546/apiserver.crt.c827695f: {Name:mk1842b1c6c23dee6f5e6446dcf8783bdf5f2b23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:30:53.598160  237844 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/no-preload-983546/apiserver.key.c827695f ...
	I1122 00:30:53.598180  237844 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/no-preload-983546/apiserver.key.c827695f: {Name:mke8223ec688fba4b168873db18ff289b0195467 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:30:53.598301  237844 certs.go:382] copying /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/no-preload-983546/apiserver.crt.c827695f -> /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/no-preload-983546/apiserver.crt
	I1122 00:30:53.598472  237844 certs.go:386] copying /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/no-preload-983546/apiserver.key.c827695f -> /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/no-preload-983546/apiserver.key
	I1122 00:30:53.598576  237844 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/no-preload-983546/proxy-client.key
	I1122 00:30:53.598609  237844 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/no-preload-983546/proxy-client.crt with IP's: []
	I1122 00:30:53.630871  237844 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/no-preload-983546/proxy-client.crt ...
	I1122 00:30:53.630893  237844 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/no-preload-983546/proxy-client.crt: {Name:mk14410a3f8d4834aa632fd117b61cc961fdfb4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:30:53.631028  237844 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/no-preload-983546/proxy-client.key ...
	I1122 00:30:53.631045  237844 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/no-preload-983546/proxy-client.key: {Name:mkc33307f651c5a591b70026fefba059ceb2dfa4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:30:53.631321  237844 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/14585.pem (1338 bytes)
	W1122 00:30:53.631374  237844 certs.go:480] ignoring /home/jenkins/minikube-integration/21934-9122/.minikube/certs/14585_empty.pem, impossibly tiny 0 bytes
	I1122 00:30:53.631397  237844 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca-key.pem (1679 bytes)
	I1122 00:30:53.631438  237844 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem (1078 bytes)
	I1122 00:30:53.631470  237844 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem (1123 bytes)
	I1122 00:30:53.631517  237844 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/key.pem (1675 bytes)
	I1122 00:30:53.631592  237844 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem (1708 bytes)
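minikube reuses the shared CA pair and mints per-profile certificates; the apiserver cert generated above carries SANs for the in-cluster service VIP (10.96.0.1), loopback, and the node IP (192.168.76.2), which is what lets clients reach the apiserver under any of those addresses. Inspecting the SANs on the produced cert, with the profile path from the log:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/no-preload-983546/apiserver.crt \
      | grep -A1 'Subject Alternative Name'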
	I1122 00:30:53.632386  237844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1122 00:30:53.650423  237844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1122 00:30:53.666849  237844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1122 00:30:53.682674  237844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1122 00:30:53.698738  237844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/no-preload-983546/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1122 00:30:53.714778  237844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/no-preload-983546/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1122 00:30:53.730592  237844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/no-preload-983546/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1122 00:30:53.746383  237844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/no-preload-983546/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1122 00:30:53.764881  237844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/certs/14585.pem --> /usr/share/ca-certificates/14585.pem (1338 bytes)
	I1122 00:30:53.782615  237844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem --> /usr/share/ca-certificates/145852.pem (1708 bytes)
	I1122 00:30:53.798531  237844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1122 00:30:53.814398  237844 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1122 00:30:53.825761  237844 ssh_runner.go:195] Run: openssl version
	I1122 00:30:53.831474  237844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14585.pem && ln -fs /usr/share/ca-certificates/14585.pem /etc/ssl/certs/14585.pem"
	I1122 00:30:53.839103  237844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14585.pem
	I1122 00:30:53.842449  237844 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 23:52 /usr/share/ca-certificates/14585.pem
	I1122 00:30:53.842488  237844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14585.pem
	I1122 00:30:53.876630  237844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14585.pem /etc/ssl/certs/51391683.0"
	I1122 00:30:53.884296  237844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145852.pem && ln -fs /usr/share/ca-certificates/145852.pem /etc/ssl/certs/145852.pem"
	I1122 00:30:53.891938  237844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145852.pem
	I1122 00:30:53.895380  237844 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 23:52 /usr/share/ca-certificates/145852.pem
	I1122 00:30:53.895424  237844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145852.pem
	I1122 00:30:53.928986  237844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145852.pem /etc/ssl/certs/3ec20f2e.0"
	I1122 00:30:53.936919  237844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1122 00:30:53.944440  237844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:30:53.947924  237844 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 23:46 /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:30:53.947976  237844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:30:53.984161  237844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
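The test/symlink pairs above reproduce what c_rehash (or update-ca-certificates) does: OpenSSL looks up a trusted certificate in /etc/ssl/certs by a file named <subject-hash>.0, where the hash is what "openssl x509 -hash" prints, hence the 51391683.0, 3ec20f2e.0, and b5213941.0 links in this run. Done by hand for one certificate:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"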
	I1122 00:30:53.992170  237844 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1122 00:30:53.995453  237844 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1122 00:30:53.995513  237844 kubeadm.go:401] StartCluster: {Name:no-preload-983546 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-983546 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:30:53.995594  237844 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1122 00:30:53.995640  237844 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1122 00:30:54.020660  237844 cri.go:89] found id: ""
	I1122 00:30:54.020710  237844 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1122 00:30:54.028002  237844 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1122 00:30:54.035088  237844 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1122 00:30:54.035165  237844 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1122 00:30:54.042126  237844 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1122 00:30:54.042142  237844 kubeadm.go:158] found existing configuration files:
	
	I1122 00:30:54.042182  237844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1122 00:30:54.049230  237844 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1122 00:30:54.049275  237844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1122 00:30:54.057665  237844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1122 00:30:54.065737  237844 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1122 00:30:54.065796  237844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1122 00:30:54.073899  237844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1122 00:30:54.081096  237844 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1122 00:30:54.081155  237844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1122 00:30:54.087980  237844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1122 00:30:54.095174  237844 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1122 00:30:54.095233  237844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
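The four grep-then-rm sequences above are stale-config cleanup: each kubeconfig under /etc/kubernetes is checked for the expected control-plane endpoint and removed if the endpoint is missing (here the files do not exist yet, so every grep exits non-zero and the rm is a no-op). The guard for a single file looks like:

    f=/etc/kubernetes/admin.conf
    sudo grep -q 'https://control-plane.minikube.internal:8443' "$f" || sudo rm -f "$f"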
	I1122 00:30:54.102404  237844 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1122 00:30:54.136520  237844 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1122 00:30:54.136604  237844 kubeadm.go:319] [preflight] Running pre-flight checks
	I1122 00:30:54.157696  237844 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1122 00:30:54.157799  237844 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1122 00:30:54.157881  237844 kubeadm.go:319] OS: Linux
	I1122 00:30:54.157966  237844 kubeadm.go:319] CGROUPS_CPU: enabled
	I1122 00:30:54.158020  237844 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1122 00:30:54.158095  237844 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1122 00:30:54.158174  237844 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1122 00:30:54.158248  237844 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1122 00:30:54.158353  237844 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1122 00:30:54.158440  237844 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1122 00:30:54.158501  237844 kubeadm.go:319] CGROUPS_IO: enabled
	I1122 00:30:54.213628  237844 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1122 00:30:54.213775  237844 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1122 00:30:54.213928  237844 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1122 00:30:54.227207  237844 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1122 00:30:51.543812  233551 addons.go:530] duration metric: took 681.200162ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1122 00:30:51.779188  233551 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-377321" context rescaled to 1 replicas
	W1122 00:30:53.278893  233551 node_ready.go:57] node "old-k8s-version-377321" has "Ready":"False" status (will retry)
	I1122 00:30:54.230151  237844 out.go:252]   - Generating certificates and keys ...
	I1122 00:30:54.230256  237844 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1122 00:30:54.230378  237844 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1122 00:30:54.471661  237844 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1122 00:30:54.768360  237844 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1122 00:30:54.860428  237844 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1122 00:30:55.141716  237844 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1122 00:30:55.302375  237844 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1122 00:30:55.302531  237844 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-983546] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1122 00:30:55.499412  237844 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1122 00:30:55.499592  237844 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-983546] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1122 00:30:55.833862  237844 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1122 00:30:55.974084  237844 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1122 00:30:56.262208  237844 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1122 00:30:56.262327  237844 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1122 00:30:56.857388  237844 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1122 00:30:57.149892  237844 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1122 00:30:57.363676  237844 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1122 00:30:57.555321  237844 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1122 00:30:57.679964  237844 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1122 00:30:57.680464  237844 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1122 00:30:57.684225  237844 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1122 00:30:56.848483  218533 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.060345227s)
	W1122 00:30:56.848526  218533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1122 00:30:56.848537  218533 logs.go:123] Gathering logs for kube-apiserver [ef4059f9999a8801835467bde827fddc80fdda8dc03f709ad65bce83ac833d0c] ...
	I1122 00:30:56.848557  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ef4059f9999a8801835467bde827fddc80fdda8dc03f709ad65bce83ac833d0c"
	I1122 00:30:56.880762  218533 logs.go:123] Gathering logs for kube-controller-manager [3f7cbd4579c805ae7dea11263df67bf0d295eba6ed9bed494de7a64e986301c4] ...
	I1122 00:30:56.880787  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3f7cbd4579c805ae7dea11263df67bf0d295eba6ed9bed494de7a64e986301c4"
	I1122 00:30:56.907654  218533 logs.go:123] Gathering logs for CRI-O ...
	I1122 00:30:56.907678  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1122 00:30:56.959137  218533 logs.go:123] Gathering logs for container status ...
	I1122 00:30:56.959161  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1122 00:30:56.986621  218533 logs.go:123] Gathering logs for kube-apiserver [2f1f63f9f3b5445cfff0c09dd95bf3ea67a98b265db3f29974dbbbe1a316589b] ...
	I1122 00:30:56.986647  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2f1f63f9f3b5445cfff0c09dd95bf3ea67a98b265db3f29974dbbbe1a316589b"
	I1122 00:30:57.017971  218533 logs.go:123] Gathering logs for kube-scheduler [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33] ...
	I1122 00:30:57.018003  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
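With the apiserver unreachable, minikube gathers diagnostics straight from the container runtime: crictl ps locates container IDs by name, then crictl logs tails each one, exactly as the Run lines above show. The same two steps by hand (IDs differ per run):

    id=$(sudo crictl ps -a --name kube-apiserver --quiet | head -n1)
    sudo crictl logs --tail 400 "$id"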
	I1122 00:30:59.560636  218533 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	W1122 00:30:55.777645  233551 node_ready.go:57] node "old-k8s-version-377321" has "Ready":"False" status (will retry)
	W1122 00:30:57.778875  233551 node_ready.go:57] node "old-k8s-version-377321" has "Ready":"False" status (will retry)
	I1122 00:30:57.686046  237844 out.go:252]   - Booting up control plane ...
	I1122 00:30:57.686202  237844 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1122 00:30:57.686321  237844 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1122 00:30:57.686423  237844 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1122 00:30:57.699299  237844 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1122 00:30:57.699462  237844 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1122 00:30:57.705564  237844 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1122 00:30:57.705873  237844 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1122 00:30:57.705934  237844 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1122 00:30:57.811796  237844 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1122 00:30:57.811905  237844 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1122 00:30:58.313283  237844 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.626658ms
	I1122 00:30:58.316244  237844 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1122 00:30:58.316359  237844 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1122 00:30:58.316473  237844 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1122 00:30:58.316583  237844 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1122 00:30:59.403436  237844 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.086980967s
	I1122 00:31:00.121800  237844 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.805538206s
	I1122 00:31:01.817253  237844 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.500964131s
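kubeadm's control-plane-check above polls three local endpoints until each answers: the apiserver's livez on the advertise address, and the controller-manager and scheduler health ports on loopback. Probing them manually (all three serve TLS with self-signed certificates, hence -k):

    curl -sk https://192.168.76.2:8443/livez
    curl -sk https://127.0.0.1:10257/healthz
    curl -sk https://127.0.0.1:10259/livez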
	I1122 00:31:01.870117  237844 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1122 00:31:02.253251  237844 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1122 00:31:02.694160  237844 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1122 00:31:02.694544  237844 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-983546 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1122 00:31:02.704275  237844 kubeadm.go:319] [bootstrap-token] Using token: xof6fj.j861ahvpere80b5v
	I1122 00:31:02.705851  237844 out.go:252]   - Configuring RBAC rules ...
	I1122 00:31:02.706015  237844 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1122 00:31:02.709552  237844 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1122 00:31:02.714407  237844 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1122 00:31:02.716774  237844 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1122 00:31:02.720565  237844 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1122 00:31:02.722501  237844 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1122 00:31:02.730188  237844 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1122 00:31:02.914189  237844 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1122 00:31:03.278623  237844 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1122 00:31:03.279754  237844 kubeadm.go:319] 
	I1122 00:31:03.279845  237844 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1122 00:31:03.279857  237844 kubeadm.go:319] 
	I1122 00:31:03.279937  237844 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1122 00:31:03.279946  237844 kubeadm.go:319] 
	I1122 00:31:03.280005  237844 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1122 00:31:03.280139  237844 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1122 00:31:03.280240  237844 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1122 00:31:03.280266  237844 kubeadm.go:319] 
	I1122 00:31:03.280353  237844 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1122 00:31:03.280363  237844 kubeadm.go:319] 
	I1122 00:31:03.280440  237844 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1122 00:31:03.280449  237844 kubeadm.go:319] 
	I1122 00:31:03.280528  237844 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1122 00:31:03.280661  237844 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1122 00:31:03.280788  237844 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1122 00:31:03.280800  237844 kubeadm.go:319] 
	I1122 00:31:03.280923  237844 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1122 00:31:03.281032  237844 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1122 00:31:03.281041  237844 kubeadm.go:319] 
	I1122 00:31:03.281173  237844 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token xof6fj.j861ahvpere80b5v \
	I1122 00:31:03.281299  237844 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b7f0723026bed3ab68bd6fad96097e10a75fbae3d2b8c0df51e0d691a79889b0 \
	I1122 00:31:03.281328  237844 kubeadm.go:319] 	--control-plane 
	I1122 00:31:03.281336  237844 kubeadm.go:319] 
	I1122 00:31:03.281461  237844 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1122 00:31:03.281475  237844 kubeadm.go:319] 
	I1122 00:31:03.281587  237844 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token xof6fj.j861ahvpere80b5v \
	I1122 00:31:03.281743  237844 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b7f0723026bed3ab68bd6fad96097e10a75fbae3d2b8c0df51e0d691a79889b0 
	I1122 00:31:03.283674  237844 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1122 00:31:03.283772  237844 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
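The join commands above pin the cluster CA via --discovery-token-ca-cert-hash; a joining node can recompute that hash from the CA certificate and compare it against the printed sha256:b7f0...89b0. The standard derivation, using the CA path this run copied to the node (openssl pkey is the algorithm-agnostic form of the documented openssl rsa step):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl pkey -pubin -outform der \
      | openssl dgst -sha256 -hex | sed 's/^.* //'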
	I1122 00:31:03.283803  237844 cni.go:84] Creating CNI manager for ""
	I1122 00:31:03.283814  237844 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1122 00:31:03.285297  237844 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1122 00:31:00.417376  218533 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": read tcp 192.168.103.1:55808->192.168.103.2:8443: read: connection reset by peer
	I1122 00:31:00.417446  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:31:00.417504  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:31:00.447415  218533 cri.go:89] found id: "ef4059f9999a8801835467bde827fddc80fdda8dc03f709ad65bce83ac833d0c"
	I1122 00:31:00.447434  218533 cri.go:89] found id: "2f1f63f9f3b5445cfff0c09dd95bf3ea67a98b265db3f29974dbbbe1a316589b"
	I1122 00:31:00.447439  218533 cri.go:89] found id: ""
	I1122 00:31:00.447446  218533 logs.go:282] 2 containers: [ef4059f9999a8801835467bde827fddc80fdda8dc03f709ad65bce83ac833d0c 2f1f63f9f3b5445cfff0c09dd95bf3ea67a98b265db3f29974dbbbe1a316589b]
	I1122 00:31:00.447501  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:31:00.451220  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:31:00.454828  218533 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1122 00:31:00.454884  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:31:00.480475  218533 cri.go:89] found id: ""
	I1122 00:31:00.480500  218533 logs.go:282] 0 containers: []
	W1122 00:31:00.480510  218533 logs.go:284] No container was found matching "etcd"
	I1122 00:31:00.480517  218533 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1122 00:31:00.480571  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:31:00.506028  218533 cri.go:89] found id: ""
	I1122 00:31:00.506064  218533 logs.go:282] 0 containers: []
	W1122 00:31:00.506075  218533 logs.go:284] No container was found matching "coredns"
	I1122 00:31:00.506083  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:31:00.506137  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:31:00.531414  218533 cri.go:89] found id: "f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:31:00.531432  218533 cri.go:89] found id: ""
	I1122 00:31:00.531440  218533 logs.go:282] 1 containers: [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33]
	I1122 00:31:00.531487  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:31:00.534876  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:31:00.534928  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:31:00.560178  218533 cri.go:89] found id: ""
	I1122 00:31:00.560197  218533 logs.go:282] 0 containers: []
	W1122 00:31:00.560211  218533 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:31:00.560217  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:31:00.560270  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:31:00.584668  218533 cri.go:89] found id: "fc0b1100e632feea08739f72906f3988af3242a3f139db7f843c6f20733f87d9"
	I1122 00:31:00.584689  218533 cri.go:89] found id: "3f7cbd4579c805ae7dea11263df67bf0d295eba6ed9bed494de7a64e986301c4"
	I1122 00:31:00.584696  218533 cri.go:89] found id: ""
	I1122 00:31:00.584706  218533 logs.go:282] 2 containers: [fc0b1100e632feea08739f72906f3988af3242a3f139db7f843c6f20733f87d9 3f7cbd4579c805ae7dea11263df67bf0d295eba6ed9bed494de7a64e986301c4]
	I1122 00:31:00.584774  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:31:00.588322  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:31:00.591741  218533 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1122 00:31:00.591806  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:31:00.616868  218533 cri.go:89] found id: ""
	I1122 00:31:00.616888  218533 logs.go:282] 0 containers: []
	W1122 00:31:00.616894  218533 logs.go:284] No container was found matching "kindnet"
	I1122 00:31:00.616899  218533 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:31:00.616951  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:31:00.642513  218533 cri.go:89] found id: ""
	I1122 00:31:00.642537  218533 logs.go:282] 0 containers: []
	W1122 00:31:00.642546  218533 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:31:00.642560  218533 logs.go:123] Gathering logs for CRI-O ...
	I1122 00:31:00.642569  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1122 00:31:00.681463  218533 logs.go:123] Gathering logs for container status ...
	I1122 00:31:00.681484  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1122 00:31:00.710431  218533 logs.go:123] Gathering logs for kubelet ...
	I1122 00:31:00.710463  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:31:00.777412  218533 logs.go:123] Gathering logs for dmesg ...
	I1122 00:31:00.777442  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1122 00:31:00.790826  218533 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:31:00.790844  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1122 00:31:00.846917  218533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1122 00:31:00.846953  218533 logs.go:123] Gathering logs for kube-scheduler [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33] ...
	I1122 00:31:00.846970  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:31:00.894343  218533 logs.go:123] Gathering logs for kube-apiserver [ef4059f9999a8801835467bde827fddc80fdda8dc03f709ad65bce83ac833d0c] ...
	I1122 00:31:00.894374  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ef4059f9999a8801835467bde827fddc80fdda8dc03f709ad65bce83ac833d0c"
	I1122 00:31:00.930790  218533 logs.go:123] Gathering logs for kube-apiserver [2f1f63f9f3b5445cfff0c09dd95bf3ea67a98b265db3f29974dbbbe1a316589b] ...
	I1122 00:31:00.930836  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2f1f63f9f3b5445cfff0c09dd95bf3ea67a98b265db3f29974dbbbe1a316589b"
	I1122 00:31:00.969504  218533 logs.go:123] Gathering logs for kube-controller-manager [fc0b1100e632feea08739f72906f3988af3242a3f139db7f843c6f20733f87d9] ...
	I1122 00:31:00.969534  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc0b1100e632feea08739f72906f3988af3242a3f139db7f843c6f20733f87d9"
	I1122 00:31:01.000933  218533 logs.go:123] Gathering logs for kube-controller-manager [3f7cbd4579c805ae7dea11263df67bf0d295eba6ed9bed494de7a64e986301c4] ...
	I1122 00:31:01.000975  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3f7cbd4579c805ae7dea11263df67bf0d295eba6ed9bed494de7a64e986301c4"
	I1122 00:31:03.533110  218533 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1122 00:31:03.534821  218533 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1122 00:31:03.534981  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:31:03.535192  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:31:03.567172  218533 cri.go:89] found id: "ef4059f9999a8801835467bde827fddc80fdda8dc03f709ad65bce83ac833d0c"
	I1122 00:31:03.567198  218533 cri.go:89] found id: ""
	I1122 00:31:03.567209  218533 logs.go:282] 1 containers: [ef4059f9999a8801835467bde827fddc80fdda8dc03f709ad65bce83ac833d0c]
	I1122 00:31:03.567272  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:31:03.572232  218533 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1122 00:31:03.572314  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:31:03.603005  218533 cri.go:89] found id: ""
	I1122 00:31:03.603033  218533 logs.go:282] 0 containers: []
	W1122 00:31:03.603044  218533 logs.go:284] No container was found matching "etcd"
	I1122 00:31:03.603069  218533 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1122 00:31:03.603137  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:31:03.632492  218533 cri.go:89] found id: ""
	I1122 00:31:03.632520  218533 logs.go:282] 0 containers: []
	W1122 00:31:03.632531  218533 logs.go:284] No container was found matching "coredns"
	I1122 00:31:03.632538  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:31:03.632594  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:31:03.660592  218533 cri.go:89] found id: "f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:31:03.660612  218533 cri.go:89] found id: ""
	I1122 00:31:03.660620  218533 logs.go:282] 1 containers: [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33]
	I1122 00:31:03.660677  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:31:03.664412  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:31:03.664483  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:31:03.688917  218533 cri.go:89] found id: ""
	I1122 00:31:03.688942  218533 logs.go:282] 0 containers: []
	W1122 00:31:03.688949  218533 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:31:03.688957  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:31:03.689007  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:31:03.715327  218533 cri.go:89] found id: "fc0b1100e632feea08739f72906f3988af3242a3f139db7f843c6f20733f87d9"
	I1122 00:31:03.715353  218533 cri.go:89] found id: "3f7cbd4579c805ae7dea11263df67bf0d295eba6ed9bed494de7a64e986301c4"
	I1122 00:31:03.715359  218533 cri.go:89] found id: ""
	I1122 00:31:03.715369  218533 logs.go:282] 2 containers: [fc0b1100e632feea08739f72906f3988af3242a3f139db7f843c6f20733f87d9 3f7cbd4579c805ae7dea11263df67bf0d295eba6ed9bed494de7a64e986301c4]
	I1122 00:31:03.715425  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:31:03.719229  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:31:03.722689  218533 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1122 00:31:03.722732  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:31:03.748024  218533 cri.go:89] found id: ""
	I1122 00:31:03.748074  218533 logs.go:282] 0 containers: []
	W1122 00:31:03.748086  218533 logs.go:284] No container was found matching "kindnet"
	I1122 00:31:03.748094  218533 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:31:03.748153  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:31:03.776382  218533 cri.go:89] found id: ""
	I1122 00:31:03.776423  218533 logs.go:282] 0 containers: []
	W1122 00:31:03.776433  218533 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:31:03.776450  218533 logs.go:123] Gathering logs for dmesg ...
	I1122 00:31:03.776465  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1122 00:31:03.790360  218533 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:31:03.790382  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1122 00:31:03.844309  218533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1122 00:31:03.844331  218533 logs.go:123] Gathering logs for kube-apiserver [ef4059f9999a8801835467bde827fddc80fdda8dc03f709ad65bce83ac833d0c] ...
	I1122 00:31:03.844343  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ef4059f9999a8801835467bde827fddc80fdda8dc03f709ad65bce83ac833d0c"
	I1122 00:31:03.874950  218533 logs.go:123] Gathering logs for kube-scheduler [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33] ...
	I1122 00:31:03.874975  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:31:03.921545  218533 logs.go:123] Gathering logs for kube-controller-manager [fc0b1100e632feea08739f72906f3988af3242a3f139db7f843c6f20733f87d9] ...
	I1122 00:31:03.921571  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc0b1100e632feea08739f72906f3988af3242a3f139db7f843c6f20733f87d9"
	I1122 00:31:03.947442  218533 logs.go:123] Gathering logs for CRI-O ...
	I1122 00:31:03.947468  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1122 00:31:03.996778  218533 logs.go:123] Gathering logs for container status ...
	I1122 00:31:03.996815  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1122 00:31:04.034732  218533 logs.go:123] Gathering logs for kubelet ...
	I1122 00:31:04.034760  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:31:04.105425  218533 logs.go:123] Gathering logs for kube-controller-manager [3f7cbd4579c805ae7dea11263df67bf0d295eba6ed9bed494de7a64e986301c4] ...
	I1122 00:31:04.105457  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3f7cbd4579c805ae7dea11263df67bf0d295eba6ed9bed494de7a64e986301c4"
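
The cycle above keeps repeating because the apiserver at 192.168.103.2:8443 is refusing TCP connections, so every healthz probe fails and minikube falls back to re-gathering diagnostics. The probe itself can be reproduced by hand (a sketch, run from the node; -k is needed because the apiserver's certificate is signed by the cluster CA, and a healthy server answers with a bare "ok"):

	$ curl -sk https://192.168.103.2:8443/healthz
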
	W1122 00:31:00.278185  233551 node_ready.go:57] node "old-k8s-version-377321" has "Ready":"False" status (will retry)
	W1122 00:31:02.278458  233551 node_ready.go:57] node "old-k8s-version-377321" has "Ready":"False" status (will retry)
	I1122 00:31:04.277385  233551 node_ready.go:49] node "old-k8s-version-377321" is "Ready"
	I1122 00:31:04.277418  233551 node_ready.go:38] duration metric: took 13.002691569s for node "old-k8s-version-377321" to be "Ready" ...
	I1122 00:31:04.277437  233551 api_server.go:52] waiting for apiserver process to appear ...
	I1122 00:31:04.277490  233551 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:31:04.290276  233551 api_server.go:72] duration metric: took 13.42785632s to wait for apiserver process to appear ...
	I1122 00:31:04.290313  233551 api_server.go:88] waiting for apiserver healthz status ...
	I1122 00:31:04.290330  233551 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1122 00:31:04.294765  233551 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1122 00:31:04.295909  233551 api_server.go:141] control plane version: v1.28.0
	I1122 00:31:04.295931  233551 api_server.go:131] duration metric: took 5.612018ms to wait for apiserver health ...
	I1122 00:31:04.295939  233551 system_pods.go:43] waiting for kube-system pods to appear ...
	I1122 00:31:04.299231  233551 system_pods.go:59] 8 kube-system pods found
	I1122 00:31:04.299264  233551 system_pods.go:61] "coredns-5dd5756b68-lwzsc" [70a74499-e309-4258-bf4a-6a5f6e5dc0ea] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:31:04.299277  233551 system_pods.go:61] "etcd-old-k8s-version-377321" [8aa89581-9bf0-403a-a4a6-5789dbefb05d] Running
	I1122 00:31:04.299286  233551 system_pods.go:61] "kindnet-f996p" [2b309fc4-d552-4e55-9780-980cc67e777e] Running
	I1122 00:31:04.299293  233551 system_pods.go:61] "kube-apiserver-old-k8s-version-377321" [d9ebff84-5ca2-43e3-b840-46a4ef45043d] Running
	I1122 00:31:04.299303  233551 system_pods.go:61] "kube-controller-manager-old-k8s-version-377321" [dffce1be-5dff-4e76-a963-c2927400b0c8] Running
	I1122 00:31:04.299310  233551 system_pods.go:61] "kube-proxy-pz8cc" [e9875757-38e1-4d70-a4aa-0a89d46f8f20] Running
	I1122 00:31:04.299319  233551 system_pods.go:61] "kube-scheduler-old-k8s-version-377321" [d90cfce7-832b-4379-a9f3-7ae9e1fcf56e] Running
	I1122 00:31:04.299328  233551 system_pods.go:61] "storage-provisioner" [f30bfad7-a1a5-4a1b-bc94-07046c111af9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:31:04.299340  233551 system_pods.go:74] duration metric: took 3.393172ms to wait for pod list to return data ...
	I1122 00:31:04.299353  233551 default_sa.go:34] waiting for default service account to be created ...
	I1122 00:31:04.301031  233551 default_sa.go:45] found service account: "default"
	I1122 00:31:04.301064  233551 default_sa.go:55] duration metric: took 1.68764ms for default service account to be created ...
	I1122 00:31:04.301089  233551 system_pods.go:116] waiting for k8s-apps to be running ...
	I1122 00:31:04.303913  233551 system_pods.go:86] 8 kube-system pods found
	I1122 00:31:04.303936  233551 system_pods.go:89] "coredns-5dd5756b68-lwzsc" [70a74499-e309-4258-bf4a-6a5f6e5dc0ea] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:31:04.303941  233551 system_pods.go:89] "etcd-old-k8s-version-377321" [8aa89581-9bf0-403a-a4a6-5789dbefb05d] Running
	I1122 00:31:04.303955  233551 system_pods.go:89] "kindnet-f996p" [2b309fc4-d552-4e55-9780-980cc67e777e] Running
	I1122 00:31:04.303960  233551 system_pods.go:89] "kube-apiserver-old-k8s-version-377321" [d9ebff84-5ca2-43e3-b840-46a4ef45043d] Running
	I1122 00:31:04.303969  233551 system_pods.go:89] "kube-controller-manager-old-k8s-version-377321" [dffce1be-5dff-4e76-a963-c2927400b0c8] Running
	I1122 00:31:04.303974  233551 system_pods.go:89] "kube-proxy-pz8cc" [e9875757-38e1-4d70-a4aa-0a89d46f8f20] Running
	I1122 00:31:04.303984  233551 system_pods.go:89] "kube-scheduler-old-k8s-version-377321" [d90cfce7-832b-4379-a9f3-7ae9e1fcf56e] Running
	I1122 00:31:04.303991  233551 system_pods.go:89] "storage-provisioner" [f30bfad7-a1a5-4a1b-bc94-07046c111af9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:31:04.304011  233551 retry.go:31] will retry after 282.990527ms: missing components: kube-dns
	I1122 00:31:04.591895  233551 system_pods.go:86] 8 kube-system pods found
	I1122 00:31:04.591927  233551 system_pods.go:89] "coredns-5dd5756b68-lwzsc" [70a74499-e309-4258-bf4a-6a5f6e5dc0ea] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:31:04.591933  233551 system_pods.go:89] "etcd-old-k8s-version-377321" [8aa89581-9bf0-403a-a4a6-5789dbefb05d] Running
	I1122 00:31:04.591940  233551 system_pods.go:89] "kindnet-f996p" [2b309fc4-d552-4e55-9780-980cc67e777e] Running
	I1122 00:31:04.591944  233551 system_pods.go:89] "kube-apiserver-old-k8s-version-377321" [d9ebff84-5ca2-43e3-b840-46a4ef45043d] Running
	I1122 00:31:04.591948  233551 system_pods.go:89] "kube-controller-manager-old-k8s-version-377321" [dffce1be-5dff-4e76-a963-c2927400b0c8] Running
	I1122 00:31:04.591951  233551 system_pods.go:89] "kube-proxy-pz8cc" [e9875757-38e1-4d70-a4aa-0a89d46f8f20] Running
	I1122 00:31:04.591954  233551 system_pods.go:89] "kube-scheduler-old-k8s-version-377321" [d90cfce7-832b-4379-a9f3-7ae9e1fcf56e] Running
	I1122 00:31:04.591960  233551 system_pods.go:89] "storage-provisioner" [f30bfad7-a1a5-4a1b-bc94-07046c111af9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:31:04.591985  233551 retry.go:31] will retry after 244.563439ms: missing components: kube-dns
	I1122 00:31:04.840463  233551 system_pods.go:86] 8 kube-system pods found
	I1122 00:31:04.840493  233551 system_pods.go:89] "coredns-5dd5756b68-lwzsc" [70a74499-e309-4258-bf4a-6a5f6e5dc0ea] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:31:04.840498  233551 system_pods.go:89] "etcd-old-k8s-version-377321" [8aa89581-9bf0-403a-a4a6-5789dbefb05d] Running
	I1122 00:31:04.840504  233551 system_pods.go:89] "kindnet-f996p" [2b309fc4-d552-4e55-9780-980cc67e777e] Running
	I1122 00:31:04.840508  233551 system_pods.go:89] "kube-apiserver-old-k8s-version-377321" [d9ebff84-5ca2-43e3-b840-46a4ef45043d] Running
	I1122 00:31:04.840512  233551 system_pods.go:89] "kube-controller-manager-old-k8s-version-377321" [dffce1be-5dff-4e76-a963-c2927400b0c8] Running
	I1122 00:31:04.840515  233551 system_pods.go:89] "kube-proxy-pz8cc" [e9875757-38e1-4d70-a4aa-0a89d46f8f20] Running
	I1122 00:31:04.840518  233551 system_pods.go:89] "kube-scheduler-old-k8s-version-377321" [d90cfce7-832b-4379-a9f3-7ae9e1fcf56e] Running
	I1122 00:31:04.840522  233551 system_pods.go:89] "storage-provisioner" [f30bfad7-a1a5-4a1b-bc94-07046c111af9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:31:04.840538  233551 retry.go:31] will retry after 414.572387ms: missing components: kube-dns
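
The retries above use short randomized backoffs while coredns finishes starting. The same condition can be checked by hand with kubectl's built-in wait (a sketch, assuming kubectl points at this cluster; kubeadm labels the coredns pods k8s-app=kube-dns):

	$ kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=120s
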
	I1122 00:31:03.286348  237844 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1122 00:31:03.290576  237844 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1122 00:31:03.290594  237844 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1122 00:31:03.304127  237844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
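
Here minikube stats /opt/cni/bin/portmap to confirm the CNI plugins are installed, then copies its in-memory CNI manifest to the node and applies it. The plugin directory can be double-checked by hand (a sketch; the profile name comes from the log below):

	$ minikube -p no-preload-983546 ssh -- ls /opt/cni/bin
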
	I1122 00:31:03.500135  237844 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1122 00:31:03.500213  237844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:31:03.500287  237844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-983546 minikube.k8s.io/updated_at=2025_11_22T00_31_03_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785 minikube.k8s.io/name=no-preload-983546 minikube.k8s.io/primary=true
	I1122 00:31:03.573141  237844 ops.go:34] apiserver oom_adj: -16
	I1122 00:31:03.573174  237844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:31:04.073792  237844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:31:04.573555  237844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:31:05.073300  237844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
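
The repeated `kubectl get sa default` calls are minikube waiting for the default ServiceAccount to exist before it grants it cluster-admin via the minikube-rbac binding created above. A minimal hand-rolled equivalent of that wait (a sketch):

	$ until kubectl -n default get serviceaccount default >/dev/null 2>&1; do sleep 0.5; done
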
	I1122 00:31:05.260936  233551 system_pods.go:86] 8 kube-system pods found
	I1122 00:31:05.260962  233551 system_pods.go:89] "coredns-5dd5756b68-lwzsc" [70a74499-e309-4258-bf4a-6a5f6e5dc0ea] Running
	I1122 00:31:05.260968  233551 system_pods.go:89] "etcd-old-k8s-version-377321" [8aa89581-9bf0-403a-a4a6-5789dbefb05d] Running
	I1122 00:31:05.260972  233551 system_pods.go:89] "kindnet-f996p" [2b309fc4-d552-4e55-9780-980cc67e777e] Running
	I1122 00:31:05.260975  233551 system_pods.go:89] "kube-apiserver-old-k8s-version-377321" [d9ebff84-5ca2-43e3-b840-46a4ef45043d] Running
	I1122 00:31:05.260979  233551 system_pods.go:89] "kube-controller-manager-old-k8s-version-377321" [dffce1be-5dff-4e76-a963-c2927400b0c8] Running
	I1122 00:31:05.260982  233551 system_pods.go:89] "kube-proxy-pz8cc" [e9875757-38e1-4d70-a4aa-0a89d46f8f20] Running
	I1122 00:31:05.260985  233551 system_pods.go:89] "kube-scheduler-old-k8s-version-377321" [d90cfce7-832b-4379-a9f3-7ae9e1fcf56e] Running
	I1122 00:31:05.260988  233551 system_pods.go:89] "storage-provisioner" [f30bfad7-a1a5-4a1b-bc94-07046c111af9] Running
	I1122 00:31:05.261006  233551 system_pods.go:126] duration metric: took 959.905299ms to wait for k8s-apps to be running ...
	I1122 00:31:05.261014  233551 system_svc.go:44] waiting for kubelet service to be running ....
	I1122 00:31:05.261071  233551 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:31:05.273521  233551 system_svc.go:56] duration metric: took 12.496172ms WaitForService to wait for kubelet
	I1122 00:31:05.273550  233551 kubeadm.go:587] duration metric: took 14.411135442s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:31:05.273573  233551 node_conditions.go:102] verifying NodePressure condition ...
	I1122 00:31:05.275806  233551 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1122 00:31:05.275832  233551 node_conditions.go:123] node cpu capacity is 8
	I1122 00:31:05.275847  233551 node_conditions.go:105] duration metric: took 2.269197ms to run NodePressure ...
	I1122 00:31:05.275860  233551 start.go:242] waiting for startup goroutines ...
	I1122 00:31:05.275867  233551 start.go:247] waiting for cluster config update ...
	I1122 00:31:05.275878  233551 start.go:256] writing updated cluster config ...
	I1122 00:31:05.276130  233551 ssh_runner.go:195] Run: rm -f paused
	I1122 00:31:05.279724  233551 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:31:05.283430  233551 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-lwzsc" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:31:05.287695  233551 pod_ready.go:94] pod "coredns-5dd5756b68-lwzsc" is "Ready"
	I1122 00:31:05.287714  233551 pod_ready.go:86] duration metric: took 4.264557ms for pod "coredns-5dd5756b68-lwzsc" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:31:05.289894  233551 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-377321" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:31:05.293363  233551 pod_ready.go:94] pod "etcd-old-k8s-version-377321" is "Ready"
	I1122 00:31:05.293379  233551 pod_ready.go:86] duration metric: took 3.469319ms for pod "etcd-old-k8s-version-377321" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:31:05.295814  233551 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-377321" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:31:05.299232  233551 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-377321" is "Ready"
	I1122 00:31:05.299261  233551 pod_ready.go:86] duration metric: took 3.427428ms for pod "kube-apiserver-old-k8s-version-377321" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:31:05.301457  233551 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-377321" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:31:05.683951  233551 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-377321" is "Ready"
	I1122 00:31:05.683982  233551 pod_ready.go:86] duration metric: took 382.50444ms for pod "kube-controller-manager-old-k8s-version-377321" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:31:05.884450  233551 pod_ready.go:83] waiting for pod "kube-proxy-pz8cc" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:31:06.285488  233551 pod_ready.go:94] pod "kube-proxy-pz8cc" is "Ready"
	I1122 00:31:06.285519  233551 pod_ready.go:86] duration metric: took 401.044652ms for pod "kube-proxy-pz8cc" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:31:06.484280  233551 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-377321" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:31:06.883516  233551 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-377321" is "Ready"
	I1122 00:31:06.883548  233551 pod_ready.go:86] duration metric: took 399.234636ms for pod "kube-scheduler-old-k8s-version-377321" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:31:06.883564  233551 pod_ready.go:40] duration metric: took 1.603807041s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:31:06.929244  233551 start.go:628] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1122 00:31:06.930862  233551 out.go:203] 
	W1122 00:31:06.932149  233551 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1122 00:31:06.933331  233551 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1122 00:31:06.935168  233551 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-377321" cluster and "default" namespace by default
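
The kubectl skew warning fires because kubectl is only guaranteed to work within one minor version of the server, and a 1.34 client against a 1.28 cluster is six minors apart. As the log itself suggests, a version-matched client can be run through minikube (a sketch, using the profile name above):

	$ minikube -p old-k8s-version-377321 kubectl -- get pods -A
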
	I1122 00:31:05.573584  237844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:31:06.073403  237844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:31:06.574140  237844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:31:07.074287  237844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:31:07.573500  237844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:31:08.073305  237844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:31:08.574120  237844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:31:08.638560  237844 kubeadm.go:1114] duration metric: took 5.138416328s to wait for elevateKubeSystemPrivileges
	I1122 00:31:08.638595  237844 kubeadm.go:403] duration metric: took 14.643086862s to StartCluster
	I1122 00:31:08.638613  237844 settings.go:142] acquiring lock: {Name:mk281bec5fc7c41e6f3fe8d6a1502b13a2db8fe5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:31:08.638696  237844 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21934-9122/kubeconfig
	I1122 00:31:08.641946  237844 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/kubeconfig: {Name:mk85f0b9d89ff824568ecbebf0d1111042a6bb93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:31:08.642225  237844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1122 00:31:08.642223  237844 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1122 00:31:08.642251  237844 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1122 00:31:08.642345  237844 addons.go:70] Setting storage-provisioner=true in profile "no-preload-983546"
	I1122 00:31:08.642365  237844 addons.go:239] Setting addon storage-provisioner=true in "no-preload-983546"
	I1122 00:31:08.642389  237844 host.go:66] Checking if "no-preload-983546" exists ...
	I1122 00:31:08.642414  237844 addons.go:70] Setting default-storageclass=true in profile "no-preload-983546"
	I1122 00:31:08.642433  237844 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-983546"
	I1122 00:31:08.642486  237844 config.go:182] Loaded profile config "no-preload-983546": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:31:08.642780  237844 cli_runner.go:164] Run: docker container inspect no-preload-983546 --format={{.State.Status}}
	I1122 00:31:08.642898  237844 cli_runner.go:164] Run: docker container inspect no-preload-983546 --format={{.State.Status}}
	I1122 00:31:08.644555  237844 out.go:179] * Verifying Kubernetes components...
	I1122 00:31:08.645722  237844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:31:08.665511  237844 addons.go:239] Setting addon default-storageclass=true in "no-preload-983546"
	I1122 00:31:08.665560  237844 host.go:66] Checking if "no-preload-983546" exists ...
	I1122 00:31:08.665942  237844 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1122 00:31:08.666027  237844 cli_runner.go:164] Run: docker container inspect no-preload-983546 --format={{.State.Status}}
	I1122 00:31:08.667184  237844 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:31:08.667205  237844 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1122 00:31:08.667262  237844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983546
	I1122 00:31:08.686483  237844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/no-preload-983546/id_rsa Username:docker}
	I1122 00:31:08.686512  237844 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1122 00:31:08.686531  237844 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1122 00:31:08.686591  237844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983546
	I1122 00:31:08.711967  237844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/no-preload-983546/id_rsa Username:docker}
	I1122 00:31:08.730310  237844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1122 00:31:08.778973  237844 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:31:08.798761  237844 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:31:08.816032  237844 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1122 00:31:08.880973  237844 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
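
The sed pipeline above splices a hosts block into the CoreDNS Corefile ahead of its forward directive, which is what makes host.minikube.internal resolvable from pods. The injected fragment, taken from the sed expression, is:

	hosts {
	   192.168.76.1 host.minikube.internal
	   fallthrough
	}

It can be inspected afterwards with (a sketch):

	$ kubectl -n kube-system get configmap coredns -o yaml
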
	I1122 00:31:08.882371  237844 node_ready.go:35] waiting up to 6m0s for node "no-preload-983546" to be "Ready" ...
	I1122 00:31:09.104917  237844 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
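
Enabling an addon boils down to a kubectl apply of the manifests minikube staged under /etc/kubernetes/addons, as the two Run lines above show. To confirm the provisioner came up (a sketch; it runs as a single pod named storage-provisioner in kube-system):

	$ kubectl -n kube-system get pod storage-provisioner
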
	I1122 00:31:06.635608  218533 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1122 00:31:06.636066  218533 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1122 00:31:06.636136  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:31:06.636201  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:31:06.663698  218533 cri.go:89] found id: "ef4059f9999a8801835467bde827fddc80fdda8dc03f709ad65bce83ac833d0c"
	I1122 00:31:06.663717  218533 cri.go:89] found id: ""
	I1122 00:31:06.663724  218533 logs.go:282] 1 containers: [ef4059f9999a8801835467bde827fddc80fdda8dc03f709ad65bce83ac833d0c]
	I1122 00:31:06.663772  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:31:06.667524  218533 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1122 00:31:06.667584  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:31:06.693295  218533 cri.go:89] found id: ""
	I1122 00:31:06.693321  218533 logs.go:282] 0 containers: []
	W1122 00:31:06.693332  218533 logs.go:284] No container was found matching "etcd"
	I1122 00:31:06.693340  218533 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1122 00:31:06.693390  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:31:06.718899  218533 cri.go:89] found id: ""
	I1122 00:31:06.718925  218533 logs.go:282] 0 containers: []
	W1122 00:31:06.718932  218533 logs.go:284] No container was found matching "coredns"
	I1122 00:31:06.718938  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:31:06.718992  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:31:06.744169  218533 cri.go:89] found id: "f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:31:06.744187  218533 cri.go:89] found id: ""
	I1122 00:31:06.744198  218533 logs.go:282] 1 containers: [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33]
	I1122 00:31:06.744257  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:31:06.748140  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:31:06.748200  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:31:06.773804  218533 cri.go:89] found id: ""
	I1122 00:31:06.773830  218533 logs.go:282] 0 containers: []
	W1122 00:31:06.773837  218533 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:31:06.773842  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:31:06.773883  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:31:06.798850  218533 cri.go:89] found id: "fc0b1100e632feea08739f72906f3988af3242a3f139db7f843c6f20733f87d9"
	I1122 00:31:06.798870  218533 cri.go:89] found id: "3f7cbd4579c805ae7dea11263df67bf0d295eba6ed9bed494de7a64e986301c4"
	I1122 00:31:06.798876  218533 cri.go:89] found id: ""
	I1122 00:31:06.798885  218533 logs.go:282] 2 containers: [fc0b1100e632feea08739f72906f3988af3242a3f139db7f843c6f20733f87d9 3f7cbd4579c805ae7dea11263df67bf0d295eba6ed9bed494de7a64e986301c4]
	I1122 00:31:06.798926  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:31:06.802633  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:31:06.806135  218533 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1122 00:31:06.806196  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:31:06.829805  218533 cri.go:89] found id: ""
	I1122 00:31:06.829826  218533 logs.go:282] 0 containers: []
	W1122 00:31:06.829833  218533 logs.go:284] No container was found matching "kindnet"
	I1122 00:31:06.829838  218533 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:31:06.829880  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:31:06.853753  218533 cri.go:89] found id: ""
	I1122 00:31:06.853771  218533 logs.go:282] 0 containers: []
	W1122 00:31:06.853778  218533 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:31:06.853792  218533 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:31:06.853801  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1122 00:31:06.910725  218533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1122 00:31:06.910746  218533 logs.go:123] Gathering logs for kube-scheduler [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33] ...
	I1122 00:31:06.910760  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:31:06.961731  218533 logs.go:123] Gathering logs for kubelet ...
	I1122 00:31:06.961763  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:31:07.035032  218533 logs.go:123] Gathering logs for dmesg ...
	I1122 00:31:07.035070  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1122 00:31:07.050321  218533 logs.go:123] Gathering logs for kube-apiserver [ef4059f9999a8801835467bde827fddc80fdda8dc03f709ad65bce83ac833d0c] ...
	I1122 00:31:07.050347  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ef4059f9999a8801835467bde827fddc80fdda8dc03f709ad65bce83ac833d0c"
	I1122 00:31:07.087581  218533 logs.go:123] Gathering logs for kube-controller-manager [fc0b1100e632feea08739f72906f3988af3242a3f139db7f843c6f20733f87d9] ...
	I1122 00:31:07.087615  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc0b1100e632feea08739f72906f3988af3242a3f139db7f843c6f20733f87d9"
	I1122 00:31:07.118660  218533 logs.go:123] Gathering logs for kube-controller-manager [3f7cbd4579c805ae7dea11263df67bf0d295eba6ed9bed494de7a64e986301c4] ...
	I1122 00:31:07.118699  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3f7cbd4579c805ae7dea11263df67bf0d295eba6ed9bed494de7a64e986301c4"
	I1122 00:31:07.146544  218533 logs.go:123] Gathering logs for CRI-O ...
	I1122 00:31:07.146580  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1122 00:31:07.186741  218533 logs.go:123] Gathering logs for container status ...
	I1122 00:31:07.186765  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1122 00:31:09.105859  237844 addons.go:530] duration metric: took 463.6097ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1122 00:31:09.385133  237844 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-983546" context rescaled to 1 replicas
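
The rescale trims coredns from kubeadm's stock two replicas down to one, which is enough for a single-node cluster. Done by hand it would be (a sketch):

	$ kubectl -n kube-system scale deployment coredns --replicas=1
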
	I1122 00:31:09.730965  218533 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1122 00:31:09.731397  218533 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1122 00:31:09.731457  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:31:09.731516  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:31:09.764115  218533 cri.go:89] found id: "ef4059f9999a8801835467bde827fddc80fdda8dc03f709ad65bce83ac833d0c"
	I1122 00:31:09.764137  218533 cri.go:89] found id: ""
	I1122 00:31:09.764151  218533 logs.go:282] 1 containers: [ef4059f9999a8801835467bde827fddc80fdda8dc03f709ad65bce83ac833d0c]
	I1122 00:31:09.764213  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:31:09.768354  218533 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1122 00:31:09.768421  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:31:09.802269  218533 cri.go:89] found id: ""
	I1122 00:31:09.802291  218533 logs.go:282] 0 containers: []
	W1122 00:31:09.802300  218533 logs.go:284] No container was found matching "etcd"
	I1122 00:31:09.802308  218533 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1122 00:31:09.802368  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:31:09.832602  218533 cri.go:89] found id: ""
	I1122 00:31:09.832625  218533 logs.go:282] 0 containers: []
	W1122 00:31:09.832637  218533 logs.go:284] No container was found matching "coredns"
	I1122 00:31:09.832645  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:31:09.832704  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:31:09.865430  218533 cri.go:89] found id: "f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:31:09.865448  218533 cri.go:89] found id: ""
	I1122 00:31:09.865455  218533 logs.go:282] 1 containers: [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33]
	I1122 00:31:09.865500  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:31:09.870028  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:31:09.870123  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:31:09.906767  218533 cri.go:89] found id: ""
	I1122 00:31:09.906792  218533 logs.go:282] 0 containers: []
	W1122 00:31:09.906802  218533 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:31:09.906810  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:31:09.906869  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:31:09.939557  218533 cri.go:89] found id: "fc0b1100e632feea08739f72906f3988af3242a3f139db7f843c6f20733f87d9"
	I1122 00:31:09.939581  218533 cri.go:89] found id: "3f7cbd4579c805ae7dea11263df67bf0d295eba6ed9bed494de7a64e986301c4"
	I1122 00:31:09.939587  218533 cri.go:89] found id: ""
	I1122 00:31:09.939597  218533 logs.go:282] 2 containers: [fc0b1100e632feea08739f72906f3988af3242a3f139db7f843c6f20733f87d9 3f7cbd4579c805ae7dea11263df67bf0d295eba6ed9bed494de7a64e986301c4]
	I1122 00:31:09.939645  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:31:09.944266  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:31:09.948722  218533 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1122 00:31:09.948787  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:31:09.983916  218533 cri.go:89] found id: ""
	I1122 00:31:09.983944  218533 logs.go:282] 0 containers: []
	W1122 00:31:09.983955  218533 logs.go:284] No container was found matching "kindnet"
	I1122 00:31:09.983963  218533 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:31:09.984022  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:31:10.020361  218533 cri.go:89] found id: ""
	I1122 00:31:10.020388  218533 logs.go:282] 0 containers: []
	W1122 00:31:10.020439  218533 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:31:10.020461  218533 logs.go:123] Gathering logs for dmesg ...
	I1122 00:31:10.020475  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1122 00:31:10.040243  218533 logs.go:123] Gathering logs for kube-apiserver [ef4059f9999a8801835467bde827fddc80fdda8dc03f709ad65bce83ac833d0c] ...
	I1122 00:31:10.040276  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ef4059f9999a8801835467bde827fddc80fdda8dc03f709ad65bce83ac833d0c"
	I1122 00:31:10.083676  218533 logs.go:123] Gathering logs for kube-controller-manager [fc0b1100e632feea08739f72906f3988af3242a3f139db7f843c6f20733f87d9] ...
	I1122 00:31:10.083717  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc0b1100e632feea08739f72906f3988af3242a3f139db7f843c6f20733f87d9"
	I1122 00:31:10.116504  218533 logs.go:123] Gathering logs for kube-controller-manager [3f7cbd4579c805ae7dea11263df67bf0d295eba6ed9bed494de7a64e986301c4] ...
	I1122 00:31:10.116547  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3f7cbd4579c805ae7dea11263df67bf0d295eba6ed9bed494de7a64e986301c4"
	I1122 00:31:10.148734  218533 logs.go:123] Gathering logs for CRI-O ...
	I1122 00:31:10.148765  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1122 00:31:10.203523  218533 logs.go:123] Gathering logs for kubelet ...
	I1122 00:31:10.203562  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:31:10.301017  218533 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:31:10.301072  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1122 00:31:10.368956  218533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1122 00:31:10.368973  218533 logs.go:123] Gathering logs for kube-scheduler [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33] ...
	I1122 00:31:10.368985  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:31:10.418039  218533 logs.go:123] Gathering logs for container status ...
	I1122 00:31:10.418074  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1122 00:31:12.948617  218533 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1122 00:31:12.949002  218533 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1122 00:31:12.949081  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:31:12.949132  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:31:12.975372  218533 cri.go:89] found id: "ef4059f9999a8801835467bde827fddc80fdda8dc03f709ad65bce83ac833d0c"
	I1122 00:31:12.975408  218533 cri.go:89] found id: ""
	I1122 00:31:12.975419  218533 logs.go:282] 1 containers: [ef4059f9999a8801835467bde827fddc80fdda8dc03f709ad65bce83ac833d0c]
	I1122 00:31:12.975474  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:31:12.979543  218533 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1122 00:31:12.979600  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:31:13.007850  218533 cri.go:89] found id: ""
	I1122 00:31:13.007880  218533 logs.go:282] 0 containers: []
	W1122 00:31:13.007892  218533 logs.go:284] No container was found matching "etcd"
	I1122 00:31:13.007900  218533 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1122 00:31:13.007957  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:31:13.044334  218533 cri.go:89] found id: ""
	I1122 00:31:13.044357  218533 logs.go:282] 0 containers: []
	W1122 00:31:13.044367  218533 logs.go:284] No container was found matching "coredns"
	I1122 00:31:13.044382  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:31:13.044459  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:31:13.069289  218533 cri.go:89] found id: "f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:31:13.069315  218533 cri.go:89] found id: ""
	I1122 00:31:13.069325  218533 logs.go:282] 1 containers: [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33]
	I1122 00:31:13.069382  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:31:13.073136  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:31:13.073206  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:31:13.099439  218533 cri.go:89] found id: ""
	I1122 00:31:13.099458  218533 logs.go:282] 0 containers: []
	W1122 00:31:13.099465  218533 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:31:13.099471  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:31:13.099531  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:31:13.123930  218533 cri.go:89] found id: "fc0b1100e632feea08739f72906f3988af3242a3f139db7f843c6f20733f87d9"
	I1122 00:31:13.123953  218533 cri.go:89] found id: ""
	I1122 00:31:13.123963  218533 logs.go:282] 1 containers: [fc0b1100e632feea08739f72906f3988af3242a3f139db7f843c6f20733f87d9]
	I1122 00:31:13.124013  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:31:13.127554  218533 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1122 00:31:13.127610  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:31:13.152306  218533 cri.go:89] found id: ""
	I1122 00:31:13.152328  218533 logs.go:282] 0 containers: []
	W1122 00:31:13.152337  218533 logs.go:284] No container was found matching "kindnet"
	I1122 00:31:13.152344  218533 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:31:13.152399  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:31:13.176656  218533 cri.go:89] found id: ""
	I1122 00:31:13.176679  218533 logs.go:282] 0 containers: []
	W1122 00:31:13.176691  218533 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:31:13.176701  218533 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:31:13.176711  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1122 00:31:13.230169  218533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1122 00:31:13.230191  218533 logs.go:123] Gathering logs for kube-apiserver [ef4059f9999a8801835467bde827fddc80fdda8dc03f709ad65bce83ac833d0c] ...
	I1122 00:31:13.230209  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ef4059f9999a8801835467bde827fddc80fdda8dc03f709ad65bce83ac833d0c"
	I1122 00:31:13.261848  218533 logs.go:123] Gathering logs for kube-scheduler [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33] ...
	I1122 00:31:13.261873  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:31:13.307128  218533 logs.go:123] Gathering logs for kube-controller-manager [fc0b1100e632feea08739f72906f3988af3242a3f139db7f843c6f20733f87d9] ...
	I1122 00:31:13.307156  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc0b1100e632feea08739f72906f3988af3242a3f139db7f843c6f20733f87d9"
	I1122 00:31:13.330592  218533 logs.go:123] Gathering logs for CRI-O ...
	I1122 00:31:13.330614  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1122 00:31:13.369576  218533 logs.go:123] Gathering logs for container status ...
	I1122 00:31:13.369600  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1122 00:31:13.399014  218533 logs.go:123] Gathering logs for kubelet ...
	I1122 00:31:13.399037  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:31:13.465571  218533 logs.go:123] Gathering logs for dmesg ...
	I1122 00:31:13.465599  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	
	
	==> CRI-O <==
	Nov 22 00:31:04 old-k8s-version-377321 crio[776]: time="2025-11-22T00:31:04.616240689Z" level=info msg="Starting container: 048c61ec7fadec7bb12cd8f9c1f021d00d77bd852e5d1be93460fc23756d0d75" id=2aa06b94-c668-4c47-8c6a-81f78bb101db name=/runtime.v1.RuntimeService/StartContainer
	Nov 22 00:31:04 old-k8s-version-377321 crio[776]: time="2025-11-22T00:31:04.618312663Z" level=info msg="Started container" PID=2129 containerID=048c61ec7fadec7bb12cd8f9c1f021d00d77bd852e5d1be93460fc23756d0d75 description=kube-system/coredns-5dd5756b68-lwzsc/coredns id=2aa06b94-c668-4c47-8c6a-81f78bb101db name=/runtime.v1.RuntimeService/StartContainer sandboxID=4e05ee9a1baa3e5d646df205ed2658845af86271e0c40d26d40c70b8341f5265
	Nov 22 00:31:07 old-k8s-version-377321 crio[776]: time="2025-11-22T00:31:07.390637992Z" level=info msg="Running pod sandbox: default/busybox/POD" id=03fdd867-2dfa-4634-a2c7-d2750a0b862e name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 22 00:31:07 old-k8s-version-377321 crio[776]: time="2025-11-22T00:31:07.390730552Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:31:07 old-k8s-version-377321 crio[776]: time="2025-11-22T00:31:07.395376225Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:5c5f5ca4fa958403305e33be4f141937a63d123c9710e93e7f6bcd724f8cbdf9 UID:b5061100-b7c0-483b-a449-40e98a2335f6 NetNS:/var/run/netns/dec4c03a-2db2-4df4-bcb5-86ad6b69891b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0002806b8}] Aliases:map[]}"
	Nov 22 00:31:07 old-k8s-version-377321 crio[776]: time="2025-11-22T00:31:07.395420571Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 22 00:31:07 old-k8s-version-377321 crio[776]: time="2025-11-22T00:31:07.405937336Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:5c5f5ca4fa958403305e33be4f141937a63d123c9710e93e7f6bcd724f8cbdf9 UID:b5061100-b7c0-483b-a449-40e98a2335f6 NetNS:/var/run/netns/dec4c03a-2db2-4df4-bcb5-86ad6b69891b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0002806b8}] Aliases:map[]}"
	Nov 22 00:31:07 old-k8s-version-377321 crio[776]: time="2025-11-22T00:31:07.406095117Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 22 00:31:07 old-k8s-version-377321 crio[776]: time="2025-11-22T00:31:07.406831153Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 22 00:31:07 old-k8s-version-377321 crio[776]: time="2025-11-22T00:31:07.407689947Z" level=info msg="Ran pod sandbox 5c5f5ca4fa958403305e33be4f141937a63d123c9710e93e7f6bcd724f8cbdf9 with infra container: default/busybox/POD" id=03fdd867-2dfa-4634-a2c7-d2750a0b862e name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 22 00:31:07 old-k8s-version-377321 crio[776]: time="2025-11-22T00:31:07.41057162Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=28d7eb62-5c0a-442b-a796-be389f5b8e86 name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:31:07 old-k8s-version-377321 crio[776]: time="2025-11-22T00:31:07.410672251Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=28d7eb62-5c0a-442b-a796-be389f5b8e86 name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:31:07 old-k8s-version-377321 crio[776]: time="2025-11-22T00:31:07.410714816Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=28d7eb62-5c0a-442b-a796-be389f5b8e86 name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:31:07 old-k8s-version-377321 crio[776]: time="2025-11-22T00:31:07.41123672Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=de57127d-6fd5-4ac0-a114-6e58369ca60c name=/runtime.v1.ImageService/PullImage
	Nov 22 00:31:07 old-k8s-version-377321 crio[776]: time="2025-11-22T00:31:07.41254725Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 22 00:31:08 old-k8s-version-377321 crio[776]: time="2025-11-22T00:31:08.020650336Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=de57127d-6fd5-4ac0-a114-6e58369ca60c name=/runtime.v1.ImageService/PullImage
	Nov 22 00:31:08 old-k8s-version-377321 crio[776]: time="2025-11-22T00:31:08.021337289Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=997ee33a-5dbf-407a-a139-37ba4da3c7b3 name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:31:08 old-k8s-version-377321 crio[776]: time="2025-11-22T00:31:08.022549299Z" level=info msg="Creating container: default/busybox/busybox" id=e42d0c00-f915-463e-9714-50a62d3d200b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:31:08 old-k8s-version-377321 crio[776]: time="2025-11-22T00:31:08.022650661Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:31:08 old-k8s-version-377321 crio[776]: time="2025-11-22T00:31:08.026893777Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:31:08 old-k8s-version-377321 crio[776]: time="2025-11-22T00:31:08.027460686Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:31:08 old-k8s-version-377321 crio[776]: time="2025-11-22T00:31:08.063652846Z" level=info msg="Created container 56af26796b70028b6efec7a86d6593bc12cba465733efd3cc8e5daa21b8dd945: default/busybox/busybox" id=e42d0c00-f915-463e-9714-50a62d3d200b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:31:08 old-k8s-version-377321 crio[776]: time="2025-11-22T00:31:08.064032199Z" level=info msg="Starting container: 56af26796b70028b6efec7a86d6593bc12cba465733efd3cc8e5daa21b8dd945" id=837f7df8-0643-46dc-8ac9-b67899c5c15f name=/runtime.v1.RuntimeService/StartContainer
	Nov 22 00:31:08 old-k8s-version-377321 crio[776]: time="2025-11-22T00:31:08.065526855Z" level=info msg="Started container" PID=2203 containerID=56af26796b70028b6efec7a86d6593bc12cba465733efd3cc8e5daa21b8dd945 description=default/busybox/busybox id=837f7df8-0643-46dc-8ac9-b67899c5c15f name=/runtime.v1.RuntimeService/StartContainer sandboxID=5c5f5ca4fa958403305e33be4f141937a63d123c9710e93e7f6bcd724f8cbdf9
	Nov 22 00:31:14 old-k8s-version-377321 crio[776]: time="2025-11-22T00:31:14.165079914Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	56af26796b700       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   5c5f5ca4fa958       busybox                                          default
	048c61ec7fade       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      10 seconds ago      Running             coredns                   0                   4e05ee9a1baa3       coredns-5dd5756b68-lwzsc                         kube-system
	94b9322b9b678       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 seconds ago      Running             storage-provisioner       0                   66c77b37d3e3c       storage-provisioner                              kube-system
	6ddeec1ea51ab       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    21 seconds ago      Running             kindnet-cni               0                   0f317bf335219       kindnet-f996p                                    kube-system
	8ff18b074b3e1       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                      24 seconds ago      Running             kube-proxy                0                   225281db9bdfa       kube-proxy-pz8cc                                 kube-system
	d2b2a53fcc565       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                      42 seconds ago      Running             kube-apiserver            0                   1ae6e6f631fde       kube-apiserver-old-k8s-version-377321            kube-system
	b29e7207e951e       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                      42 seconds ago      Running             kube-scheduler            0                   4a93fb41eec18       kube-scheduler-old-k8s-version-377321            kube-system
	675478266ca78       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                      42 seconds ago      Running             kube-controller-manager   0                   f70a10d25844e       kube-controller-manager-old-k8s-version-377321   kube-system
	4aa9703b5cc6c       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      42 seconds ago      Running             etcd                      0                   26380f3b4328c       etcd-old-k8s-version-377321                      kube-system
	
	
	==> coredns [048c61ec7fadec7bb12cd8f9c1f021d00d77bd852e5d1be93460fc23756d0d75] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:47564 - 42134 "HINFO IN 2715207826765868836.6691441873759794048. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.126698381s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-377321
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-377321
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785
	                    minikube.k8s.io/name=old-k8s-version-377321
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_22T00_30_38_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 22 Nov 2025 00:30:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-377321
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 22 Nov 2025 00:31:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 22 Nov 2025 00:31:08 +0000   Sat, 22 Nov 2025 00:30:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 22 Nov 2025 00:31:08 +0000   Sat, 22 Nov 2025 00:30:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 22 Nov 2025 00:31:08 +0000   Sat, 22 Nov 2025 00:30:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 22 Nov 2025 00:31:08 +0000   Sat, 22 Nov 2025 00:31:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-377321
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 5665009e93b91d39dc05718b691e3875
	  System UUID:                6461bf81-9141-4b24-bd64-39ea1ba5c316
	  Boot ID:                    8e6acb0e-95c3-406d-a4d5-d86e16610b0f
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 coredns-5dd5756b68-lwzsc                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-old-k8s-version-377321                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         37s
	  kube-system                 kindnet-f996p                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-old-k8s-version-377321             250m (3%)     0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-controller-manager-old-k8s-version-377321    200m (2%)     0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-proxy-pz8cc                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-old-k8s-version-377321             100m (1%)     0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 24s   kube-proxy       
	  Normal  Starting                 38s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  37s   kubelet          Node old-k8s-version-377321 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    37s   kubelet          Node old-k8s-version-377321 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     37s   kubelet          Node old-k8s-version-377321 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s   node-controller  Node old-k8s-version-377321 event: Registered Node old-k8s-version-377321 in Controller
	  Normal  NodeReady                11s   kubelet          Node old-k8s-version-377321 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.082960] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023673] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.276450] kauditd_printk_skb: 47 callbacks suppressed
	[Nov21 23:48] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.062274] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.023896] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.023887] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.023879] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.023873] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +2.047773] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +4.031577] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[Nov21 23:49] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[ +16.383304] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[ +32.252695] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	
	
	==> etcd [4aa9703b5cc6c826b873595dd29b05bcf71cc86ddc6c02070fc9cfcdb9ed1977] <==
	{"level":"warn","ts":"2025-11-22T00:30:50.851159Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-11-22T00:30:50.483154Z","time spent":"367.97748ms","remote":"127.0.0.1:60968","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":156,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/serviceaccounts/default/default\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/default/default\" value_size:108 >> failure:<>"}
	{"level":"info","ts":"2025-11-22T00:30:50.851189Z","caller":"traceutil/trace.go:171","msg":"trace[233592839] linearizableReadLoop","detail":"{readStateIndex:323; appliedIndex:313; }","duration":"326.468064ms","start":"2025-11-22T00:30:50.524711Z","end":"2025-11-22T00:30:50.851179Z","steps":["trace[233592839] 'read index received'  (duration: 50.358905ms)","trace[233592839] 'applied index is now lower than readState.Index'  (duration: 276.108469ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-22T00:30:50.851009Z","caller":"traceutil/trace.go:171","msg":"trace[1046642112] transaction","detail":"{read_only:false; response_revision:302; number_of_response:1; }","duration":"367.983962ms","start":"2025-11-22T00:30:50.483001Z","end":"2025-11-22T00:30:50.850985Z","steps":["trace[1046642112] 'process raft request'  (duration: 92.02863ms)","trace[1046642112] 'compare'  (duration: 275.095667ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-22T00:30:50.851253Z","caller":"traceutil/trace.go:171","msg":"trace[1528857979] transaction","detail":"{read_only:false; response_revision:306; number_of_response:1; }","duration":"366.263961ms","start":"2025-11-22T00:30:50.484982Z","end":"2025-11-22T00:30:50.851246Z","steps":["trace[1528857979] 'process raft request'  (duration: 365.976905ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-22T00:30:50.851308Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-11-22T00:30:50.482986Z","time spent":"368.303606ms","remote":"127.0.0.1:60940","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":596,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/kube-dns\" mod_revision:0 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/kube-dns\" value_size:539 >> failure:<>"}
	{"level":"warn","ts":"2025-11-22T00:30:50.85133Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-11-22T00:30:50.48497Z","time spent":"366.328706ms","remote":"127.0.0.1:33022","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3748,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/controllerrevisions/kube-system/kindnet-dc46c54c6\" mod_revision:0 > success:<request_put:<key:\"/registry/controllerrevisions/kube-system/kindnet-dc46c54c6\" value_size:3681 >> failure:<>"}
	{"level":"info","ts":"2025-11-22T00:30:50.851349Z","caller":"traceutil/trace.go:171","msg":"trace[342486582] transaction","detail":"{read_only:false; response_revision:305; number_of_response:1; }","duration":"366.722921ms","start":"2025-11-22T00:30:50.484613Z","end":"2025-11-22T00:30:50.851336Z","steps":["trace[342486582] 'process raft request'  (duration: 366.263618ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-22T00:30:50.851437Z","caller":"traceutil/trace.go:171","msg":"trace[593481944] transaction","detail":"{read_only:false; response_revision:307; number_of_response:1; }","duration":"366.030503ms","start":"2025-11-22T00:30:50.485399Z","end":"2025-11-22T00:30:50.851429Z","steps":["trace[593481944] 'process raft request'  (duration: 365.598135ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-22T00:30:50.851484Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-11-22T00:30:50.485386Z","time spent":"366.068216ms","remote":"127.0.0.1:33022","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2127,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/controllerrevisions/kube-system/kube-proxy-5468d454c4\" mod_revision:0 > success:<request_put:<key:\"/registry/controllerrevisions/kube-system/kube-proxy-5468d454c4\" value_size:2056 >> failure:<>"}
	{"level":"warn","ts":"2025-11-22T00:30:50.851551Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"326.843002ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" ","response":"range_response_count:1 size:207"}
	{"level":"warn","ts":"2025-11-22T00:30:50.851567Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-11-22T00:30:50.484601Z","time spent":"366.776241ms","remote":"127.0.0.1:32794","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1281,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/certificatesigningrequests/csr-wm8v6\" mod_revision:4 > success:<request_put:<key:\"/registry/certificatesigningrequests/csr-wm8v6\" value_size:1227 >> failure:<request_range:<key:\"/registry/certificatesigningrequests/csr-wm8v6\" > >"}
	{"level":"info","ts":"2025-11-22T00:30:50.851592Z","caller":"traceutil/trace.go:171","msg":"trace[1033363843] transaction","detail":"{read_only:false; response_revision:311; number_of_response:1; }","duration":"363.901623ms","start":"2025-11-22T00:30:50.487682Z","end":"2025-11-22T00:30:50.851583Z","steps":["trace[1033363843] 'process raft request'  (duration: 363.454017ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-22T00:30:50.851593Z","caller":"traceutil/trace.go:171","msg":"trace[1725210898] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/replicaset-controller; range_end:; response_count:1; response_revision:311; }","duration":"326.901481ms","start":"2025-11-22T00:30:50.52468Z","end":"2025-11-22T00:30:50.851582Z","steps":["trace[1725210898] 'agreement among raft nodes before linearized reading'  (duration: 326.7866ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-22T00:30:50.851624Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-11-22T00:30:50.524669Z","time spent":"326.948085ms","remote":"127.0.0.1:60968","response type":"/etcdserverpb.KV/Range","request count":0,"request size":61,"response count":1,"response size":230,"request content":"key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" "}
	{"level":"info","ts":"2025-11-22T00:30:50.851633Z","caller":"traceutil/trace.go:171","msg":"trace[901260261] transaction","detail":"{read_only:false; response_revision:308; number_of_response:1; }","duration":"365.070423ms","start":"2025-11-22T00:30:50.486551Z","end":"2025-11-22T00:30:50.851622Z","steps":["trace[901260261] 'process raft request'  (duration: 364.468854ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-22T00:30:50.851638Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-11-22T00:30:50.487407Z","time spent":"364.203395ms","remote":"127.0.0.1:32986","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3980,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/deployments/kube-system/coredns\" mod_revision:301 > success:<request_put:<key:\"/registry/deployments/kube-system/coredns\" value_size:3931 >> failure:<request_range:<key:\"/registry/deployments/kube-system/coredns\" > >"}
	{"level":"warn","ts":"2025-11-22T00:30:50.851671Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-11-22T00:30:50.48654Z","time spent":"365.11035ms","remote":"127.0.0.1:32880","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2191,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/clusterroles/edit\" mod_revision:79 > success:<request_put:<key:\"/registry/clusterroles/edit\" value_size:2156 >> failure:<request_range:<key:\"/registry/clusterroles/edit\" > >"}
	{"level":"info","ts":"2025-11-22T00:30:50.851692Z","caller":"traceutil/trace.go:171","msg":"trace[2061881299] transaction","detail":"{read_only:false; response_revision:310; number_of_response:1; }","duration":"364.348057ms","start":"2025-11-22T00:30:50.487334Z","end":"2025-11-22T00:30:50.851682Z","steps":["trace[2061881299] 'process raft request'  (duration: 363.759027ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-22T00:30:50.851741Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-11-22T00:30:50.48732Z","time spent":"364.394475ms","remote":"127.0.0.1:32880","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":899,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/clusterroles/admin\" mod_revision:78 > success:<request_put:<key:\"/registry/clusterroles/admin\" value_size:863 >> failure:<request_range:<key:\"/registry/clusterroles/admin\" > >"}
	{"level":"warn","ts":"2025-11-22T00:30:50.85156Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"307.597545ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" ","response":"range_response_count:1 size:171"}
	{"level":"info","ts":"2025-11-22T00:30:50.851781Z","caller":"traceutil/trace.go:171","msg":"trace[195361441] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:1; response_revision:311; }","duration":"307.821046ms","start":"2025-11-22T00:30:50.543951Z","end":"2025-11-22T00:30:50.851772Z","steps":["trace[195361441] 'agreement among raft nodes before linearized reading'  (duration: 307.577203ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-22T00:30:50.8518Z","caller":"traceutil/trace.go:171","msg":"trace[1762570342] transaction","detail":"{read_only:false; response_revision:309; number_of_response:1; }","duration":"364.540427ms","start":"2025-11-22T00:30:50.487253Z","end":"2025-11-22T00:30:50.851793Z","steps":["trace[1762570342] 'process raft request'  (duration: 363.813957ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-22T00:30:50.851804Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-11-22T00:30:50.543939Z","time spent":"307.857753ms","remote":"127.0.0.1:60968","response type":"/etcdserverpb.KV/Range","request count":0,"request size":43,"response count":1,"response size":194,"request content":"key:\"/registry/serviceaccounts/default/default\" "}
	{"level":"warn","ts":"2025-11-22T00:30:50.851834Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-11-22T00:30:50.487242Z","time spent":"364.573436ms","remote":"127.0.0.1:32880","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2094,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/clusterroles/view\" mod_revision:80 > success:<request_put:<key:\"/registry/clusterroles/view\" value_size:2059 >> failure:<request_range:<key:\"/registry/clusterroles/view\" > >"}
	
	
	==> kernel <==
	 00:31:15 up  1:13,  0 user,  load average: 3.99, 3.00, 1.75
	Linux old-k8s-version-377321 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6ddeec1ea51ab11203fc03b0b9ca84f26748e3abb94f5633928806a475d8147d] <==
	I1122 00:30:53.557414       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1122 00:30:53.557618       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1122 00:30:53.557750       1 main.go:148] setting mtu 1500 for CNI 
	I1122 00:30:53.557770       1 main.go:178] kindnetd IP family: "ipv4"
	I1122 00:30:53.557789       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-22T00:30:53Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1122 00:30:53.756959       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1122 00:30:53.757186       1 controller.go:381] "Waiting for informer caches to sync"
	I1122 00:30:53.757199       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1122 00:30:53.757387       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1122 00:30:54.150985       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1122 00:30:54.151016       1 metrics.go:72] Registering metrics
	I1122 00:30:54.151160       1 controller.go:711] "Syncing nftables rules"
	I1122 00:31:03.765152       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1122 00:31:03.765218       1 main.go:301] handling current node
	I1122 00:31:13.760337       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1122 00:31:13.760365       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d2b2a53fcc5653bd90b79ac8d325c548cdb15cc8ff3e98725a9378d34a8a30ff] <==
	I1122 00:30:34.774424       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1122 00:30:34.774444       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1122 00:30:34.774468       1 aggregator.go:166] initial CRD sync complete...
	I1122 00:30:34.774476       1 autoregister_controller.go:141] Starting autoregister controller
	I1122 00:30:34.774482       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1122 00:30:34.774490       1 cache.go:39] Caches are synced for autoregister controller
	I1122 00:30:34.774746       1 shared_informer.go:318] Caches are synced for configmaps
	I1122 00:30:34.775456       1 controller.go:624] quota admission added evaluator for: namespaces
	I1122 00:30:34.811146       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1122 00:30:34.950400       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1122 00:30:35.681316       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1122 00:30:35.684896       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1122 00:30:35.684912       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1122 00:30:36.166257       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1122 00:30:36.204741       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1122 00:30:36.286722       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1122 00:30:36.292142       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1122 00:30:36.293044       1 controller.go:624] quota admission added evaluator for: endpoints
	I1122 00:30:36.297936       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1122 00:30:36.720983       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1122 00:30:37.936254       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1122 00:30:37.945759       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1122 00:30:37.955146       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1122 00:30:50.182691       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1122 00:30:50.483108       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [675478266ca7858b07dcd1c28f09487068b830947c6058bec0de25e75541515b] <==
	I1122 00:30:49.682293       1 shared_informer.go:318] Caches are synced for resource quota
	I1122 00:30:49.726095       1 shared_informer.go:318] Caches are synced for attach detach
	I1122 00:30:49.727232       1 shared_informer.go:318] Caches are synced for resource quota
	I1122 00:30:50.112264       1 shared_informer.go:318] Caches are synced for garbage collector
	I1122 00:30:50.169441       1 shared_informer.go:318] Caches are synced for garbage collector
	I1122 00:30:50.169474       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1122 00:30:50.188819       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1122 00:30:50.863199       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-pz8cc"
	I1122 00:30:50.870183       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-f996p"
	I1122 00:30:50.875826       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-g4zwc"
	I1122 00:30:50.887370       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-lwzsc"
	I1122 00:30:50.911884       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="723.286045ms"
	I1122 00:30:50.962426       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="50.441848ms"
	I1122 00:30:50.963957       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="1.458324ms"
	I1122 00:30:50.964035       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="40.194µs"
	I1122 00:30:51.307696       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1122 00:30:51.318943       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-g4zwc"
	I1122 00:30:51.326302       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="18.671893ms"
	I1122 00:30:51.333197       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.840789ms"
	I1122 00:30:51.333303       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="56.092µs"
	I1122 00:31:04.268988       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="111.006µs"
	I1122 00:31:04.280598       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="104.949µs"
	I1122 00:31:04.673098       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1122 00:31:05.106733       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.390732ms"
	I1122 00:31:05.106848       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="68.281µs"
	
	
	==> kube-proxy [8ff18b074b3e18ad711b6c5e57a07b7209f07d6ba7ea8320338f27e53c349015] <==
	I1122 00:30:51.310301       1 server_others.go:69] "Using iptables proxy"
	I1122 00:30:51.323716       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1122 00:30:51.351745       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1122 00:30:51.354224       1 server_others.go:152] "Using iptables Proxier"
	I1122 00:30:51.354252       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1122 00:30:51.354258       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1122 00:30:51.354292       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1122 00:30:51.354569       1 server.go:846] "Version info" version="v1.28.0"
	I1122 00:30:51.354586       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:30:51.355316       1 config.go:188] "Starting service config controller"
	I1122 00:30:51.355360       1 config.go:315] "Starting node config controller"
	I1122 00:30:51.355397       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1122 00:30:51.355399       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1122 00:30:51.355341       1 config.go:97] "Starting endpoint slice config controller"
	I1122 00:30:51.355453       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1122 00:30:51.456347       1 shared_informer.go:318] Caches are synced for node config
	I1122 00:30:51.456353       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1122 00:30:51.456380       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [b29e7207e951ee175075544e8d15d6eb2eed419870d0661d30089e26a3c144a0] <==
	W1122 00:30:34.733618       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1122 00:30:34.733648       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1122 00:30:34.733625       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1122 00:30:34.733393       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1122 00:30:34.733687       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1122 00:30:34.733482       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1122 00:30:34.733735       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1122 00:30:34.733754       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1122 00:30:35.547974       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1122 00:30:35.549424       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1122 00:30:35.613973       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1122 00:30:35.614005       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1122 00:30:35.706762       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1122 00:30:35.706812       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1122 00:30:35.770617       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1122 00:30:35.770661       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1122 00:30:35.897614       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1122 00:30:35.897653       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1122 00:30:35.919300       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1122 00:30:35.919345       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1122 00:30:35.961674       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1122 00:30:35.961811       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1122 00:30:36.155216       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1122 00:30:36.155256       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1122 00:30:39.130852       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 22 00:30:49 old-k8s-version-377321 kubelet[1382]: I1122 00:30:49.653824    1382 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 22 00:30:50 old-k8s-version-377321 kubelet[1382]: I1122 00:30:50.887671    1382 topology_manager.go:215] "Topology Admit Handler" podUID="2b309fc4-d552-4e55-9780-980cc67e777e" podNamespace="kube-system" podName="kindnet-f996p"
	Nov 22 00:30:50 old-k8s-version-377321 kubelet[1382]: I1122 00:30:50.887880    1382 topology_manager.go:215] "Topology Admit Handler" podUID="e9875757-38e1-4d70-a4aa-0a89d46f8f20" podNamespace="kube-system" podName="kube-proxy-pz8cc"
	Nov 22 00:30:51 old-k8s-version-377321 kubelet[1382]: I1122 00:30:51.000690    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2b309fc4-d552-4e55-9780-980cc67e777e-lib-modules\") pod \"kindnet-f996p\" (UID: \"2b309fc4-d552-4e55-9780-980cc67e777e\") " pod="kube-system/kindnet-f996p"
	Nov 22 00:30:51 old-k8s-version-377321 kubelet[1382]: I1122 00:30:51.000770    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e9875757-38e1-4d70-a4aa-0a89d46f8f20-xtables-lock\") pod \"kube-proxy-pz8cc\" (UID: \"e9875757-38e1-4d70-a4aa-0a89d46f8f20\") " pod="kube-system/kube-proxy-pz8cc"
	Nov 22 00:30:51 old-k8s-version-377321 kubelet[1382]: I1122 00:30:51.000807    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e9875757-38e1-4d70-a4aa-0a89d46f8f20-lib-modules\") pod \"kube-proxy-pz8cc\" (UID: \"e9875757-38e1-4d70-a4aa-0a89d46f8f20\") " pod="kube-system/kube-proxy-pz8cc"
	Nov 22 00:30:51 old-k8s-version-377321 kubelet[1382]: I1122 00:30:51.000873    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvbxw\" (UniqueName: \"kubernetes.io/projected/e9875757-38e1-4d70-a4aa-0a89d46f8f20-kube-api-access-lvbxw\") pod \"kube-proxy-pz8cc\" (UID: \"e9875757-38e1-4d70-a4aa-0a89d46f8f20\") " pod="kube-system/kube-proxy-pz8cc"
	Nov 22 00:30:51 old-k8s-version-377321 kubelet[1382]: I1122 00:30:51.000924    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbmj9\" (UniqueName: \"kubernetes.io/projected/2b309fc4-d552-4e55-9780-980cc67e777e-kube-api-access-jbmj9\") pod \"kindnet-f996p\" (UID: \"2b309fc4-d552-4e55-9780-980cc67e777e\") " pod="kube-system/kindnet-f996p"
	Nov 22 00:30:51 old-k8s-version-377321 kubelet[1382]: I1122 00:30:51.001077    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/2b309fc4-d552-4e55-9780-980cc67e777e-cni-cfg\") pod \"kindnet-f996p\" (UID: \"2b309fc4-d552-4e55-9780-980cc67e777e\") " pod="kube-system/kindnet-f996p"
	Nov 22 00:30:51 old-k8s-version-377321 kubelet[1382]: I1122 00:30:51.001213    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2b309fc4-d552-4e55-9780-980cc67e777e-xtables-lock\") pod \"kindnet-f996p\" (UID: \"2b309fc4-d552-4e55-9780-980cc67e777e\") " pod="kube-system/kindnet-f996p"
	Nov 22 00:30:51 old-k8s-version-377321 kubelet[1382]: I1122 00:30:51.001314    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e9875757-38e1-4d70-a4aa-0a89d46f8f20-kube-proxy\") pod \"kube-proxy-pz8cc\" (UID: \"e9875757-38e1-4d70-a4aa-0a89d46f8f20\") " pod="kube-system/kube-proxy-pz8cc"
	Nov 22 00:30:54 old-k8s-version-377321 kubelet[1382]: I1122 00:30:54.066590    1382 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-f996p" podStartSLOduration=1.893746953 podCreationTimestamp="2025-11-22 00:30:50 +0000 UTC" firstStartedPulling="2025-11-22 00:30:51.208423083 +0000 UTC m=+13.297031069" lastFinishedPulling="2025-11-22 00:30:53.381218802 +0000 UTC m=+15.469826787" observedRunningTime="2025-11-22 00:30:54.066151513 +0000 UTC m=+16.154759524" watchObservedRunningTime="2025-11-22 00:30:54.066542671 +0000 UTC m=+16.155150665"
	Nov 22 00:30:54 old-k8s-version-377321 kubelet[1382]: I1122 00:30:54.066708    1382 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-pz8cc" podStartSLOduration=4.066685603 podCreationTimestamp="2025-11-22 00:30:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:30:52.063873162 +0000 UTC m=+14.152481157" watchObservedRunningTime="2025-11-22 00:30:54.066685603 +0000 UTC m=+16.155293595"
	Nov 22 00:31:04 old-k8s-version-377321 kubelet[1382]: I1122 00:31:04.248721    1382 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 22 00:31:04 old-k8s-version-377321 kubelet[1382]: I1122 00:31:04.269288    1382 topology_manager.go:215] "Topology Admit Handler" podUID="70a74499-e309-4258-bf4a-6a5f6e5dc0ea" podNamespace="kube-system" podName="coredns-5dd5756b68-lwzsc"
	Nov 22 00:31:04 old-k8s-version-377321 kubelet[1382]: I1122 00:31:04.269676    1382 topology_manager.go:215] "Topology Admit Handler" podUID="f30bfad7-a1a5-4a1b-bc94-07046c111af9" podNamespace="kube-system" podName="storage-provisioner"
	Nov 22 00:31:04 old-k8s-version-377321 kubelet[1382]: I1122 00:31:04.292451    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85cgm\" (UniqueName: \"kubernetes.io/projected/f30bfad7-a1a5-4a1b-bc94-07046c111af9-kube-api-access-85cgm\") pod \"storage-provisioner\" (UID: \"f30bfad7-a1a5-4a1b-bc94-07046c111af9\") " pod="kube-system/storage-provisioner"
	Nov 22 00:31:04 old-k8s-version-377321 kubelet[1382]: I1122 00:31:04.292502    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tld7h\" (UniqueName: \"kubernetes.io/projected/70a74499-e309-4258-bf4a-6a5f6e5dc0ea-kube-api-access-tld7h\") pod \"coredns-5dd5756b68-lwzsc\" (UID: \"70a74499-e309-4258-bf4a-6a5f6e5dc0ea\") " pod="kube-system/coredns-5dd5756b68-lwzsc"
	Nov 22 00:31:04 old-k8s-version-377321 kubelet[1382]: I1122 00:31:04.292531    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f30bfad7-a1a5-4a1b-bc94-07046c111af9-tmp\") pod \"storage-provisioner\" (UID: \"f30bfad7-a1a5-4a1b-bc94-07046c111af9\") " pod="kube-system/storage-provisioner"
	Nov 22 00:31:04 old-k8s-version-377321 kubelet[1382]: I1122 00:31:04.292610    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70a74499-e309-4258-bf4a-6a5f6e5dc0ea-config-volume\") pod \"coredns-5dd5756b68-lwzsc\" (UID: \"70a74499-e309-4258-bf4a-6a5f6e5dc0ea\") " pod="kube-system/coredns-5dd5756b68-lwzsc"
	Nov 22 00:31:05 old-k8s-version-377321 kubelet[1382]: I1122 00:31:05.088595    1382 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.088536782 podCreationTimestamp="2025-11-22 00:30:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:31:05.088336469 +0000 UTC m=+27.176944463" watchObservedRunningTime="2025-11-22 00:31:05.088536782 +0000 UTC m=+27.177144777"
	Nov 22 00:31:05 old-k8s-version-377321 kubelet[1382]: I1122 00:31:05.099445    1382 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-lwzsc" podStartSLOduration=15.099325863 podCreationTimestamp="2025-11-22 00:30:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:31:05.099022914 +0000 UTC m=+27.187630942" watchObservedRunningTime="2025-11-22 00:31:05.099325863 +0000 UTC m=+27.187933857"
	Nov 22 00:31:07 old-k8s-version-377321 kubelet[1382]: I1122 00:31:07.089175    1382 topology_manager.go:215] "Topology Admit Handler" podUID="b5061100-b7c0-483b-a449-40e98a2335f6" podNamespace="default" podName="busybox"
	Nov 22 00:31:07 old-k8s-version-377321 kubelet[1382]: I1122 00:31:07.108295    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knbpw\" (UniqueName: \"kubernetes.io/projected/b5061100-b7c0-483b-a449-40e98a2335f6-kube-api-access-knbpw\") pod \"busybox\" (UID: \"b5061100-b7c0-483b-a449-40e98a2335f6\") " pod="default/busybox"
	Nov 22 00:31:08 old-k8s-version-377321 kubelet[1382]: E1122 00:31:08.061569    1382 cadvisor_stats_provider.go:444] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/crio-56af26796b70028b6efec7a86d6593bc12cba465733efd3cc8e5daa21b8dd945.scope\": RecentStats: unable to find data in memory cache]"
	
	
	==> storage-provisioner [94b9322b9b678e6eb453bd6bcffae7b42836d8a9185b341ba8bca1681b3b2f01] <==
	I1122 00:31:04.630447       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1122 00:31:04.641679       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1122 00:31:04.641733       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1122 00:31:04.648739       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1122 00:31:04.648804       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bfb541a1-68f5-4661-b231-b0efc70ccf66", APIVersion:"v1", ResourceVersion:"394", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-377321_35dd5323-baff-4d03-ac42-1378474d6564 became leader
	I1122 00:31:04.648887       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-377321_35dd5323-baff-4d03-ac42-1378474d6564!
	I1122 00:31:04.749220       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-377321_35dd5323-baff-4d03-ac42-1378474d6564!
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-377321 -n old-k8s-version-377321
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-377321 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.04s)
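The post-mortem above shows a healthy cluster (node Ready, every kube-system pod Running), so the 2.04s failure sits in minikube's addon-enable path rather than in Kubernetes itself. A sketch for re-running just this subtest locally, assuming the usual minikube repo layout with integration tests under test/integration and a prebuilt out/minikube-linux-amd64 (the timeout is illustrative):

	# Re-run only the failing subtest; go test accepts slash-separated subtest patterns.
	go test ./test/integration -v -timeout 30m \
	  -run 'TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive'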

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-983546 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-983546 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (251.509063ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-22T00:31:32Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p no-preload-983546 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-983546 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-983546 describe deploy/metrics-server -n kube-system: exit status 1 (61.035837ms)
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-983546 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
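The exit status 11 here is minikube's paused-state check failing before the addon is ever applied: per the stderr above, minikube shells into the node and runs "sudo runc list -f json", which exits 1 because /run/runc is missing. A minimal manual reproduction of that same check, assuming the profile name from this run, would be:

	out/minikube-linux-amd64 -p no-preload-983546 ssh -- sudo runc list -f json

And once metrics-server actually deploys, the image assertion made at start_stop_delete_test.go:219 could be verified directly (a sketch, not something the test run executed):

	kubectl --context no-preload-983546 get deploy metrics-server -n kube-system -o jsonpath='{.spec.template.spec.containers[0].image}'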
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-983546
helpers_test.go:243: (dbg) docker inspect no-preload-983546:
-- stdout --
	[
	    {
	        "Id": "c2d293e7736fe65170e6cfd040c09329134034bfda01d91b453ec0eec7c9e352",
	        "Created": "2025-11-22T00:30:36.232639451Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 238298,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-22T00:30:36.263176415Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e5906b22e872a17998ae88aee6d850484e7a99144e0db6afcf2c44a53e6042d4",
	        "ResolvConfPath": "/var/lib/docker/containers/c2d293e7736fe65170e6cfd040c09329134034bfda01d91b453ec0eec7c9e352/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c2d293e7736fe65170e6cfd040c09329134034bfda01d91b453ec0eec7c9e352/hostname",
	        "HostsPath": "/var/lib/docker/containers/c2d293e7736fe65170e6cfd040c09329134034bfda01d91b453ec0eec7c9e352/hosts",
	        "LogPath": "/var/lib/docker/containers/c2d293e7736fe65170e6cfd040c09329134034bfda01d91b453ec0eec7c9e352/c2d293e7736fe65170e6cfd040c09329134034bfda01d91b453ec0eec7c9e352-json.log",
	        "Name": "/no-preload-983546",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-983546:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-983546",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c2d293e7736fe65170e6cfd040c09329134034bfda01d91b453ec0eec7c9e352",
	                "LowerDir": "/var/lib/docker/overlay2/bc6393df2dd76d86cdad678716ec02d56105fb8e0b4e3f663f709c883352904b-init/diff:/var/lib/docker/overlay2/ba9391341a6f38c98c330a240b44a37e901d779a7c95e15c141c59c46e09c348/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bc6393df2dd76d86cdad678716ec02d56105fb8e0b4e3f663f709c883352904b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bc6393df2dd76d86cdad678716ec02d56105fb8e0b4e3f663f709c883352904b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bc6393df2dd76d86cdad678716ec02d56105fb8e0b4e3f663f709c883352904b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-983546",
	                "Source": "/var/lib/docker/volumes/no-preload-983546/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-983546",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-983546",
	                "name.minikube.sigs.k8s.io": "no-preload-983546",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "35df764b9143fbd56f0614db09b63326c014eca9fe2fe85e845b16f6189db5a4",
	            "SandboxKey": "/var/run/docker/netns/35df764b9143",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33058"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33059"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33062"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33060"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33061"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-983546": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "31079b5ab75bb84607cf8165e3a4b768618e4392cb34bdd501083b6a67908eda",
	                    "EndpointID": "d6418fc243cf14cd991f5792ff5122ccbbdfbd3ea30ad7f17676110e8edb6476",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "c2:29:97:ad:ea:e8",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-983546",
	                        "c2d293e7736f"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
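One detail from the inspect output above that may bear on the runc failure: /run inside the node is an empty tmpfs ("Tmpfs": {"/run": ""}), so /run/runc exists only after the runtime has written container state there. That would be consistent with "runc list" failing with "open /run/runc: no such file or directory" shortly after the container started, though the logs do not confirm this reading. A quick host-side check (hypothetical, not run by the test) would be:

	docker exec no-preload-983546 ls -ld /run/runc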
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-983546 -n no-preload-983546
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-983546 logs -n 25
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-239758 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-239758          │ jenkins │ v1.37.0 │ 22 Nov 25 00:29 UTC │                     │
	│ ssh     │ -p cilium-239758 sudo containerd config dump                                                                                                                                                                                                  │ cilium-239758          │ jenkins │ v1.37.0 │ 22 Nov 25 00:29 UTC │                     │
	│ ssh     │ -p cilium-239758 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-239758          │ jenkins │ v1.37.0 │ 22 Nov 25 00:29 UTC │                     │
	│ ssh     │ -p cilium-239758 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-239758          │ jenkins │ v1.37.0 │ 22 Nov 25 00:29 UTC │                     │
	│ ssh     │ -p cilium-239758 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-239758          │ jenkins │ v1.37.0 │ 22 Nov 25 00:29 UTC │                     │
	│ ssh     │ -p cilium-239758 sudo crio config                                                                                                                                                                                                             │ cilium-239758          │ jenkins │ v1.37.0 │ 22 Nov 25 00:29 UTC │                     │
	│ delete  │ -p cilium-239758                                                                                                                                                                                                                              │ cilium-239758          │ jenkins │ v1.37.0 │ 22 Nov 25 00:29 UTC │ 22 Nov 25 00:29 UTC │
	│ start   │ -p cert-options-524062 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-524062    │ jenkins │ v1.37.0 │ 22 Nov 25 00:29 UTC │ 22 Nov 25 00:30 UTC │
	│ delete  │ -p NoKubernetes-953061                                                                                                                                                                                                                        │ NoKubernetes-953061    │ jenkins │ v1.37.0 │ 22 Nov 25 00:30 UTC │ 22 Nov 25 00:30 UTC │
	│ start   │ -p NoKubernetes-953061 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                         │ NoKubernetes-953061    │ jenkins │ v1.37.0 │ 22 Nov 25 00:30 UTC │ 22 Nov 25 00:30 UTC │
	│ ssh     │ -p NoKubernetes-953061 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-953061    │ jenkins │ v1.37.0 │ 22 Nov 25 00:30 UTC │                     │
	│ ssh     │ cert-options-524062 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-524062    │ jenkins │ v1.37.0 │ 22 Nov 25 00:30 UTC │ 22 Nov 25 00:30 UTC │
	│ ssh     │ -p cert-options-524062 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-524062    │ jenkins │ v1.37.0 │ 22 Nov 25 00:30 UTC │ 22 Nov 25 00:30 UTC │
	│ delete  │ -p cert-options-524062                                                                                                                                                                                                                        │ cert-options-524062    │ jenkins │ v1.37.0 │ 22 Nov 25 00:30 UTC │ 22 Nov 25 00:30 UTC │
	│ start   │ -p old-k8s-version-377321 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-377321 │ jenkins │ v1.37.0 │ 22 Nov 25 00:30 UTC │ 22 Nov 25 00:31 UTC │
	│ stop    │ -p NoKubernetes-953061                                                                                                                                                                                                                        │ NoKubernetes-953061    │ jenkins │ v1.37.0 │ 22 Nov 25 00:30 UTC │ 22 Nov 25 00:30 UTC │
	│ start   │ -p NoKubernetes-953061 --driver=docker  --container-runtime=crio                                                                                                                                                                              │ NoKubernetes-953061    │ jenkins │ v1.37.0 │ 22 Nov 25 00:30 UTC │ 22 Nov 25 00:30 UTC │
	│ ssh     │ -p NoKubernetes-953061 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-953061    │ jenkins │ v1.37.0 │ 22 Nov 25 00:30 UTC │                     │
	│ delete  │ -p NoKubernetes-953061                                                                                                                                                                                                                        │ NoKubernetes-953061    │ jenkins │ v1.37.0 │ 22 Nov 25 00:30 UTC │ 22 Nov 25 00:30 UTC │
	│ start   │ -p no-preload-983546 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-983546      │ jenkins │ v1.37.0 │ 22 Nov 25 00:30 UTC │ 22 Nov 25 00:31 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-377321 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-377321 │ jenkins │ v1.37.0 │ 22 Nov 25 00:31 UTC │                     │
	│ stop    │ -p old-k8s-version-377321 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-377321 │ jenkins │ v1.37.0 │ 22 Nov 25 00:31 UTC │ 22 Nov 25 00:31 UTC │
	│ addons  │ enable metrics-server -p no-preload-983546 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-983546      │ jenkins │ v1.37.0 │ 22 Nov 25 00:31 UTC │                     │
	│ addons  │ enable dashboard -p old-k8s-version-377321 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-377321 │ jenkins │ v1.37.0 │ 22 Nov 25 00:31 UTC │ 22 Nov 25 00:31 UTC │
	│ start   │ -p old-k8s-version-377321 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-377321 │ jenkins │ v1.37.0 │ 22 Nov 25 00:31 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/22 00:31:32
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1122 00:31:32.482140  246023 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:31:32.482433  246023 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:31:32.482443  246023 out.go:374] Setting ErrFile to fd 2...
	I1122 00:31:32.482447  246023 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:31:32.482629  246023 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9122/.minikube/bin
	I1122 00:31:32.483037  246023 out.go:368] Setting JSON to false
	I1122 00:31:32.484178  246023 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4441,"bootTime":1763767051,"procs":312,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1122 00:31:32.484232  246023 start.go:143] virtualization: kvm guest
	I1122 00:31:32.489167  246023 out.go:179] * [old-k8s-version-377321] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1122 00:31:32.490776  246023 out.go:179]   - MINIKUBE_LOCATION=21934
	I1122 00:31:32.490789  246023 notify.go:221] Checking for updates...
	I1122 00:31:32.493014  246023 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 00:31:32.494547  246023 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-9122/kubeconfig
	I1122 00:31:32.495569  246023 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-9122/.minikube
	I1122 00:31:32.496488  246023 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1122 00:31:32.497505  246023 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 00:31:32.498881  246023 config.go:182] Loaded profile config "old-k8s-version-377321": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1122 00:31:32.500496  246023 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1122 00:31:32.501416  246023 driver.go:422] Setting default libvirt URI to qemu:///system
	I1122 00:31:32.526813  246023 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1122 00:31:32.526925  246023 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:31:32.582936  246023 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-22 00:31:32.573392497 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1122 00:31:32.583075  246023 docker.go:319] overlay module found
	I1122 00:31:32.584573  246023 out.go:179] * Using the docker driver based on existing profile
	I1122 00:31:32.585841  246023 start.go:309] selected driver: docker
	I1122 00:31:32.585862  246023 start.go:930] validating driver "docker" against &{Name:old-k8s-version-377321 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-377321 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:31:32.585962  246023 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 00:31:32.586745  246023 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:31:32.646360  246023 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-22 00:31:32.636335263 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1122 00:31:32.646631  246023 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:31:32.646660  246023 cni.go:84] Creating CNI manager for ""
	I1122 00:31:32.646719  246023 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1122 00:31:32.646782  246023 start.go:353] cluster config:
	{Name:old-k8s-version-377321 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-377321 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:31:32.648588  246023 out.go:179] * Starting "old-k8s-version-377321" primary control-plane node in "old-k8s-version-377321" cluster
	I1122 00:31:32.649941  246023 cache.go:134] Beginning downloading kic base image for docker with crio
	I1122 00:31:32.651088  246023 out.go:179] * Pulling base image v0.0.48-1763588073-21934 ...
	I1122 00:31:32.652020  246023 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1122 00:31:32.652072  246023 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21934-9122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1122 00:31:32.652090  246023 cache.go:65] Caching tarball of preloaded images
	I1122 00:31:32.652110  246023 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon
	I1122 00:31:32.652212  246023 preload.go:238] Found /home/jenkins/minikube-integration/21934-9122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1122 00:31:32.652227  246023 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1122 00:31:32.652373  246023 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/old-k8s-version-377321/config.json ...
	I1122 00:31:32.676939  246023 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon, skipping pull
	I1122 00:31:32.676962  246023 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e exists in daemon, skipping load
	I1122 00:31:32.676980  246023 cache.go:243] Successfully downloaded all kic artifacts
	I1122 00:31:32.677010  246023 start.go:360] acquireMachinesLock for old-k8s-version-377321: {Name:mk57ad4831fe6e59b3ff68e03595378dc0d430dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:31:32.677097  246023 start.go:364] duration metric: took 51.914µs to acquireMachinesLock for "old-k8s-version-377321"
	I1122 00:31:32.677121  246023 start.go:96] Skipping create...Using existing machine configuration
	I1122 00:31:32.677128  246023 fix.go:54] fixHost starting: 
	I1122 00:31:32.677396  246023 cli_runner.go:164] Run: docker container inspect old-k8s-version-377321 --format={{.State.Status}}
	I1122 00:31:32.697140  246023 fix.go:112] recreateIfNeeded on old-k8s-version-377321: state=Stopped err=<nil>
	W1122 00:31:32.697175  246023 fix.go:138] unexpected machine state, will restart: <nil>
	
	
	==> CRI-O <==
	Nov 22 00:31:22 no-preload-983546 crio[773]: time="2025-11-22T00:31:22.202582736Z" level=info msg="Starting container: 1cd33595f9e9dceebed67ab8a9945eaaa98d2bccdeb0ea7c5372fb60330ebbbe" id=b76d4a6a-9dc2-415e-a8b8-ccce1d4aef54 name=/runtime.v1.RuntimeService/StartContainer
	Nov 22 00:31:22 no-preload-983546 crio[773]: time="2025-11-22T00:31:22.204701916Z" level=info msg="Started container" PID=2930 containerID=1cd33595f9e9dceebed67ab8a9945eaaa98d2bccdeb0ea7c5372fb60330ebbbe description=kube-system/coredns-66bc5c9577-4psr2/coredns id=b76d4a6a-9dc2-415e-a8b8-ccce1d4aef54 name=/runtime.v1.RuntimeService/StartContainer sandboxID=bbebda0d1920727047fafe5d7be185be0157f4b5bdff7771dd65d08abc62cc59
	Nov 22 00:31:25 no-preload-983546 crio[773]: time="2025-11-22T00:31:25.28316574Z" level=info msg="Running pod sandbox: default/busybox/POD" id=cb09200e-3e58-4737-9ce0-4df44fa0a3c5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 22 00:31:25 no-preload-983546 crio[773]: time="2025-11-22T00:31:25.28325575Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:31:25 no-preload-983546 crio[773]: time="2025-11-22T00:31:25.28815879Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:f2209dfd6badb1020defcd5672f11604a311e7ec02ef3c307900e9b6711ff854 UID:fb74c704-d21d-4567-8e3f-cfa2d8132aa9 NetNS:/var/run/netns/99572ddd-658a-479f-af52-635830a9b845 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000510480}] Aliases:map[]}"
	Nov 22 00:31:25 no-preload-983546 crio[773]: time="2025-11-22T00:31:25.288184047Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 22 00:31:25 no-preload-983546 crio[773]: time="2025-11-22T00:31:25.296857279Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:f2209dfd6badb1020defcd5672f11604a311e7ec02ef3c307900e9b6711ff854 UID:fb74c704-d21d-4567-8e3f-cfa2d8132aa9 NetNS:/var/run/netns/99572ddd-658a-479f-af52-635830a9b845 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000510480}] Aliases:map[]}"
	Nov 22 00:31:25 no-preload-983546 crio[773]: time="2025-11-22T00:31:25.296966812Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 22 00:31:25 no-preload-983546 crio[773]: time="2025-11-22T00:31:25.297727326Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 22 00:31:25 no-preload-983546 crio[773]: time="2025-11-22T00:31:25.298946679Z" level=info msg="Ran pod sandbox f2209dfd6badb1020defcd5672f11604a311e7ec02ef3c307900e9b6711ff854 with infra container: default/busybox/POD" id=cb09200e-3e58-4737-9ce0-4df44fa0a3c5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 22 00:31:25 no-preload-983546 crio[773]: time="2025-11-22T00:31:25.299929521Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0bfec5a4-e34e-4e2e-af44-08d7982482b1 name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:31:25 no-preload-983546 crio[773]: time="2025-11-22T00:31:25.300030496Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=0bfec5a4-e34e-4e2e-af44-08d7982482b1 name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:31:25 no-preload-983546 crio[773]: time="2025-11-22T00:31:25.300107336Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=0bfec5a4-e34e-4e2e-af44-08d7982482b1 name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:31:25 no-preload-983546 crio[773]: time="2025-11-22T00:31:25.300658381Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=cc141e9c-a309-4617-b551-2747c16df282 name=/runtime.v1.ImageService/PullImage
	Nov 22 00:31:25 no-preload-983546 crio[773]: time="2025-11-22T00:31:25.302014075Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 22 00:31:25 no-preload-983546 crio[773]: time="2025-11-22T00:31:25.904746209Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=cc141e9c-a309-4617-b551-2747c16df282 name=/runtime.v1.ImageService/PullImage
	Nov 22 00:31:25 no-preload-983546 crio[773]: time="2025-11-22T00:31:25.905325446Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ddc12963-f34d-4d07-8615-75ebaa0649be name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:31:25 no-preload-983546 crio[773]: time="2025-11-22T00:31:25.906514056Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=84415d70-eaa6-40e0-b28a-43e20b6b6e7b name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:31:25 no-preload-983546 crio[773]: time="2025-11-22T00:31:25.909622428Z" level=info msg="Creating container: default/busybox/busybox" id=9d381583-1979-4ab0-a59e-c0cea185c12f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:31:25 no-preload-983546 crio[773]: time="2025-11-22T00:31:25.909708179Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:31:25 no-preload-983546 crio[773]: time="2025-11-22T00:31:25.913126738Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:31:25 no-preload-983546 crio[773]: time="2025-11-22T00:31:25.913522648Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:31:25 no-preload-983546 crio[773]: time="2025-11-22T00:31:25.938441033Z" level=info msg="Created container ced409578e67b3b9c00d28097be30687063f63776c11d59bb789264211fa342b: default/busybox/busybox" id=9d381583-1979-4ab0-a59e-c0cea185c12f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:31:25 no-preload-983546 crio[773]: time="2025-11-22T00:31:25.938891067Z" level=info msg="Starting container: ced409578e67b3b9c00d28097be30687063f63776c11d59bb789264211fa342b" id=08e202a9-9939-420b-9bc9-ed223ab385ac name=/runtime.v1.RuntimeService/StartContainer
	Nov 22 00:31:25 no-preload-983546 crio[773]: time="2025-11-22T00:31:25.940344447Z" level=info msg="Started container" PID=3006 containerID=ced409578e67b3b9c00d28097be30687063f63776c11d59bb789264211fa342b description=default/busybox/busybox id=08e202a9-9939-420b-9bc9-ed223ab385ac name=/runtime.v1.RuntimeService/StartContainer sandboxID=f2209dfd6badb1020defcd5672f11604a311e7ec02ef3c307900e9b6711ff854
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	ced409578e67b       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   f2209dfd6badb       busybox                                     default
	1cd33595f9e9d       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 seconds ago      Running             coredns                   0                   bbebda0d19207       coredns-66bc5c9577-4psr2                    kube-system
	dc2819a4e6bf1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 seconds ago      Running             storage-provisioner       0                   4d229a7bf7bcc       storage-provisioner                         kube-system
	7bf912eaef413       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    22 seconds ago      Running             kindnet-cni               0                   bcb71f1cc45c9       kindnet-rpr2g                               kube-system
	b07fa991af748       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      24 seconds ago      Running             kube-proxy                0                   482e1d9aa79ac       kube-proxy-gnlfp                            kube-system
	36148a86b116d       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      34 seconds ago      Running             kube-scheduler            0                   3e0c814b7d7ce       kube-scheduler-no-preload-983546            kube-system
	de6301d33ab66       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      34 seconds ago      Running             kube-controller-manager   0                   9853e8a73600f       kube-controller-manager-no-preload-983546   kube-system
	815ae51ec6d5d       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      34 seconds ago      Running             kube-apiserver            0                   be9cd7ef08bdc       kube-apiserver-no-preload-983546            kube-system
	6cc5d68470803       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      34 seconds ago      Running             etcd                      0                   4cc3080a75039       etcd-no-preload-983546                      kube-system
	
	
	==> coredns [1cd33595f9e9dceebed67ab8a9945eaaa98d2bccdeb0ea7c5372fb60330ebbbe] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37468 - 62247 "HINFO IN 8789241792486918585.6807896193038996109. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.157418737s
	
	
	==> describe nodes <==
	Name:               no-preload-983546
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-983546
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785
	                    minikube.k8s.io/name=no-preload-983546
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_22T00_31_03_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 22 Nov 2025 00:31:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-983546
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 22 Nov 2025 00:31:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 22 Nov 2025 00:31:21 +0000   Sat, 22 Nov 2025 00:30:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 22 Nov 2025 00:31:21 +0000   Sat, 22 Nov 2025 00:30:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 22 Nov 2025 00:31:21 +0000   Sat, 22 Nov 2025 00:30:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 22 Nov 2025 00:31:21 +0000   Sat, 22 Nov 2025 00:31:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-983546
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 5665009e93b91d39dc05718b691e3875
	  System UUID:                1d18d6ff-8b0a-4769-8dee-cdd1e29786a3
	  Boot ID:                    8e6acb0e-95c3-406d-a4d5-d86e16610b0f
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-66bc5c9577-4psr2                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-no-preload-983546                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         32s
	  kube-system                 kindnet-rpr2g                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-no-preload-983546             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-no-preload-983546    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-gnlfp                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-no-preload-983546             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 24s   kube-proxy       
	  Normal  Starting                 31s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s   kubelet          Node no-preload-983546 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s   kubelet          Node no-preload-983546 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s   kubelet          Node no-preload-983546 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s   node-controller  Node no-preload-983546 event: Registered Node no-preload-983546 in Controller
	  Normal  NodeReady                12s   kubelet          Node no-preload-983546 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.082960] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023673] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.276450] kauditd_printk_skb: 47 callbacks suppressed
	[Nov21 23:48] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.062274] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.023896] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.023887] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.023879] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.023873] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +2.047773] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +4.031577] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[Nov21 23:49] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[ +16.383304] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[ +32.252695] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	
	
	==> etcd [6cc5d68470803e9acc5465113d32246b983a2008573a41a18c97c1b4a3fd9661] <==
	{"level":"warn","ts":"2025-11-22T00:30:59.395964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:30:59.403379Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:30:59.409426Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:30:59.415941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:30:59.421983Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:30:59.428387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:30:59.442416Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:30:59.448403Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:30:59.455090Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:30:59.498945Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:31:02.073299Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"138.663902ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/secrets\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-11-22T00:31:02.073387Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"138.683228ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-22T00:31:02.073409Z","caller":"traceutil/trace.go:172","msg":"trace[299111674] range","detail":"{range_begin:/registry/secrets; range_end:; response_count:0; response_revision:207; }","duration":"138.781828ms","start":"2025-11-22T00:31:01.934608Z","end":"2025-11-22T00:31:02.073390Z","steps":["trace[299111674] 'agreement among raft nodes before linearized reading'  (duration: 53.012056ms)","trace[299111674] 'range keys from in-memory index tree'  (duration: 85.636732ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-22T00:31:02.073434Z","caller":"traceutil/trace.go:172","msg":"trace[1007856819] range","detail":"{range_begin:/registry/serviceaccounts; range_end:; response_count:0; response_revision:208; }","duration":"138.736057ms","start":"2025-11-22T00:31:01.934688Z","end":"2025-11-22T00:31:02.073424Z","steps":["trace[1007856819] 'agreement among raft nodes before linearized reading'  (duration: 138.660541ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-22T00:31:02.073299Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"138.66097ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/certificate-controller\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-22T00:31:02.073472Z","caller":"traceutil/trace.go:172","msg":"trace[884715350] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/certificate-controller; range_end:; response_count:0; response_revision:207; }","duration":"138.851838ms","start":"2025-11-22T00:31:01.934606Z","end":"2025-11-22T00:31:02.073458Z","steps":["trace[884715350] 'agreement among raft nodes before linearized reading'  (duration: 53.003591ms)","trace[884715350] 'range keys from in-memory index tree'  (duration: 85.633039ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-22T00:31:02.073477Z","caller":"traceutil/trace.go:172","msg":"trace[1580888902] transaction","detail":"{read_only:false; response_revision:208; number_of_response:1; }","duration":"201.813638ms","start":"2025-11-22T00:31:01.871644Z","end":"2025-11-22T00:31:02.073458Z","steps":["trace[1580888902] 'process raft request'  (duration: 116.001893ms)","trace[1580888902] 'compare'  (duration: 85.583193ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-22T00:31:02.250098Z","caller":"traceutil/trace.go:172","msg":"trace[2025980935] transaction","detail":"{read_only:false; response_revision:210; number_of_response:1; }","duration":"172.18714ms","start":"2025-11-22T00:31:02.077889Z","end":"2025-11-22T00:31:02.250076Z","steps":["trace[2025980935] 'process raft request'  (duration: 113.120939ms)","trace[2025980935] 'compare'  (duration: 58.957246ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-22T00:31:02.251821Z","caller":"traceutil/trace.go:172","msg":"trace[1157801685] transaction","detail":"{read_only:false; response_revision:211; number_of_response:1; }","duration":"116.989819ms","start":"2025-11-22T00:31:02.134818Z","end":"2025-11-22T00:31:02.251808Z","steps":["trace[1157801685] 'process raft request'  (duration: 116.926832ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-22T00:31:02.434879Z","caller":"traceutil/trace.go:172","msg":"trace[379045074] transaction","detail":"{read_only:false; response_revision:213; number_of_response:1; }","duration":"179.018821ms","start":"2025-11-22T00:31:02.255847Z","end":"2025-11-22T00:31:02.434865Z","steps":["trace[379045074] 'process raft request'  (duration: 178.989871ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-22T00:31:02.434926Z","caller":"traceutil/trace.go:172","msg":"trace[513658414] transaction","detail":"{read_only:false; response_revision:212; number_of_response:1; }","duration":"180.316236ms","start":"2025-11-22T00:31:02.254587Z","end":"2025-11-22T00:31:02.434903Z","steps":["trace[513658414] 'process raft request'  (duration: 178.278242ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-22T00:31:02.692220Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"183.870584ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-22T00:31:02.692289Z","caller":"traceutil/trace.go:172","msg":"trace[756653144] range","detail":"{range_begin:/registry/minions; range_end:; response_count:0; response_revision:214; }","duration":"183.942601ms","start":"2025-11-22T00:31:02.508323Z","end":"2025-11-22T00:31:02.692265Z","steps":["trace[756653144] 'agreement among raft nodes before linearized reading'  (duration: 56.819137ms)","trace[756653144] 'range keys from in-memory index tree'  (duration: 127.028644ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-22T00:31:02.692738Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"127.105779ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638356806607304981 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/rolebindings/kube-system/kubeadm:kubelet-config\" mod_revision:0 > success:<request_put:<key:\"/registry/rolebindings/kube-system/kubeadm:kubelet-config\" value_size:458 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-11-22T00:31:02.692805Z","caller":"traceutil/trace.go:172","msg":"trace[1937690022] transaction","detail":"{read_only:false; response_revision:215; number_of_response:1; }","duration":"189.503956ms","start":"2025-11-22T00:31:02.503289Z","end":"2025-11-22T00:31:02.692793Z","steps":["trace[1937690022] 'process raft request'  (duration: 61.878072ms)","trace[1937690022] 'compare'  (duration: 127.007457ms)"],"step_count":2}
	
	
	==> kernel <==
	 00:31:33 up  1:14,  0 user,  load average: 3.32, 2.91, 1.74
	Linux no-preload-983546 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7bf912eaef413906756c4dbe00fe3e95694136bc4660f6b89a873dce1bffa4bc] <==
	I1122 00:31:11.187703       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1122 00:31:11.187939       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1122 00:31:11.188103       1 main.go:148] setting mtu 1500 for CNI 
	I1122 00:31:11.188121       1 main.go:178] kindnetd IP family: "ipv4"
	I1122 00:31:11.188143       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-22T00:31:11Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1122 00:31:11.389111       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1122 00:31:11.389599       1 controller.go:381] "Waiting for informer caches to sync"
	I1122 00:31:11.389653       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1122 00:31:11.389835       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1122 00:31:11.590599       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1122 00:31:11.590619       1 metrics.go:72] Registering metrics
	I1122 00:31:11.590659       1 controller.go:711] "Syncing nftables rules"
	I1122 00:31:21.391222       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1122 00:31:21.391300       1 main.go:301] handling current node
	I1122 00:31:31.391133       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1122 00:31:31.391171       1 main.go:301] handling current node
	
	
	==> kube-apiserver [815ae51ec6d5d672221ad5f16736d8765350098abee292b837a255cdb18cc33a] <==
	E1122 00:31:00.072292       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1122 00:31:00.081371       1 controller.go:667] quota admission added evaluator for: namespaces
	I1122 00:31:00.084483       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:31:00.084954       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1122 00:31:00.090754       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:31:00.090910       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1122 00:31:00.275369       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1122 00:31:00.881529       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1122 00:31:00.886303       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1122 00:31:00.886319       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1122 00:31:01.311621       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1122 00:31:01.343267       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1122 00:31:01.386255       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1122 00:31:01.391229       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1122 00:31:01.391923       1 controller.go:667] quota admission added evaluator for: endpoints
	I1122 00:31:01.395596       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1122 00:31:02.077537       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1122 00:31:02.905540       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1122 00:31:02.913302       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1122 00:31:02.920245       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1122 00:31:07.951703       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1122 00:31:08.150537       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1122 00:31:08.351366       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:31:08.355113       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1122 00:31:32.062282       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:59930: use of closed network connection
	
	
	==> kube-controller-manager [de6301d33ab660d5263ce2a5010a0a79fa9f2aa18c2fb33f7ad55886a299ff9c] <==
	I1122 00:31:07.295331       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1122 00:31:07.297827       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1122 00:31:07.297856       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1122 00:31:07.297928       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1122 00:31:07.297941       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1122 00:31:07.297947       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1122 00:31:07.297953       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1122 00:31:07.298029       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-983546"
	I1122 00:31:07.298106       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1122 00:31:07.298209       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1122 00:31:07.298305       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1122 00:31:07.298862       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1122 00:31:07.298911       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1122 00:31:07.299044       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1122 00:31:07.299107       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1122 00:31:07.299309       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1122 00:31:07.299454       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1122 00:31:07.300521       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1122 00:31:07.300924       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-983546" podCIDRs=["10.244.0.0/24"]
	I1122 00:31:07.302396       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1122 00:31:07.307605       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1122 00:31:07.308768       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1122 00:31:07.316205       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1122 00:31:07.323415       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1122 00:31:22.300280       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [b07fa991af74804b7840c066ab8fa4c588d92451694f4ec9120bd8e0069cbdfc] <==
	I1122 00:31:08.974307       1 server_linux.go:53] "Using iptables proxy"
	I1122 00:31:09.042991       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1122 00:31:09.143604       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1122 00:31:09.143633       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1122 00:31:09.143760       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1122 00:31:09.162096       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1122 00:31:09.162146       1 server_linux.go:132] "Using iptables Proxier"
	I1122 00:31:09.167422       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1122 00:31:09.167796       1 server.go:527] "Version info" version="v1.34.1"
	I1122 00:31:09.167825       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:31:09.169122       1 config.go:403] "Starting serviceCIDR config controller"
	I1122 00:31:09.169155       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1122 00:31:09.169149       1 config.go:106] "Starting endpoint slice config controller"
	I1122 00:31:09.169152       1 config.go:200] "Starting service config controller"
	I1122 00:31:09.169174       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1122 00:31:09.169196       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1122 00:31:09.169225       1 config.go:309] "Starting node config controller"
	I1122 00:31:09.169238       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1122 00:31:09.169245       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1122 00:31:09.269308       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1122 00:31:09.269333       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1122 00:31:09.269343       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [36148a86b116d1e695329cc1d4988950703e09a6459ae9a2a82383174ea7d1b7] <==
	E1122 00:31:00.116309       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1122 00:31:00.117738       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1122 00:31:00.117692       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1122 00:31:00.117944       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1122 00:31:00.117985       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1122 00:31:00.118023       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1122 00:31:00.118082       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1122 00:31:00.118174       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1122 00:31:00.118510       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1122 00:31:00.118560       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1122 00:31:00.118606       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1122 00:31:00.118659       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1122 00:31:00.118671       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1122 00:31:00.118736       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1122 00:31:00.119372       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1122 00:31:00.119382       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1122 00:31:00.119493       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1122 00:31:00.119828       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1122 00:31:00.119821       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1122 00:31:00.943295       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1122 00:31:01.033767       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1122 00:31:01.045841       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1122 00:31:01.085896       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1122 00:31:01.161828       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1122 00:31:03.516020       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 22 00:31:07 no-preload-983546 kubelet[2303]: I1122 00:31:07.995951    2303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/59f42291-1016-4584-9fdb-5df09910070b-xtables-lock\") pod \"kindnet-rpr2g\" (UID: \"59f42291-1016-4584-9fdb-5df09910070b\") " pod="kube-system/kindnet-rpr2g"
	Nov 22 00:31:07 no-preload-983546 kubelet[2303]: I1122 00:31:07.995997    2303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/59f42291-1016-4584-9fdb-5df09910070b-lib-modules\") pod \"kindnet-rpr2g\" (UID: \"59f42291-1016-4584-9fdb-5df09910070b\") " pod="kube-system/kindnet-rpr2g"
	Nov 22 00:31:07 no-preload-983546 kubelet[2303]: I1122 00:31:07.996020    2303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54hbj\" (UniqueName: \"kubernetes.io/projected/59f42291-1016-4584-9fdb-5df09910070b-kube-api-access-54hbj\") pod \"kindnet-rpr2g\" (UID: \"59f42291-1016-4584-9fdb-5df09910070b\") " pod="kube-system/kindnet-rpr2g"
	Nov 22 00:31:07 no-preload-983546 kubelet[2303]: I1122 00:31:07.996082    2303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0b842766-a9da-46e8-9259-f0cdca13c349-kube-proxy\") pod \"kube-proxy-gnlfp\" (UID: \"0b842766-a9da-46e8-9259-f0cdca13c349\") " pod="kube-system/kube-proxy-gnlfp"
	Nov 22 00:31:07 no-preload-983546 kubelet[2303]: I1122 00:31:07.996110    2303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0b842766-a9da-46e8-9259-f0cdca13c349-xtables-lock\") pod \"kube-proxy-gnlfp\" (UID: \"0b842766-a9da-46e8-9259-f0cdca13c349\") " pod="kube-system/kube-proxy-gnlfp"
	Nov 22 00:31:07 no-preload-983546 kubelet[2303]: I1122 00:31:07.996132    2303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0b842766-a9da-46e8-9259-f0cdca13c349-lib-modules\") pod \"kube-proxy-gnlfp\" (UID: \"0b842766-a9da-46e8-9259-f0cdca13c349\") " pod="kube-system/kube-proxy-gnlfp"
	Nov 22 00:31:07 no-preload-983546 kubelet[2303]: I1122 00:31:07.996151    2303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzhww\" (UniqueName: \"kubernetes.io/projected/0b842766-a9da-46e8-9259-f0cdca13c349-kube-api-access-nzhww\") pod \"kube-proxy-gnlfp\" (UID: \"0b842766-a9da-46e8-9259-f0cdca13c349\") " pod="kube-system/kube-proxy-gnlfp"
	Nov 22 00:31:07 no-preload-983546 kubelet[2303]: I1122 00:31:07.996177    2303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/59f42291-1016-4584-9fdb-5df09910070b-cni-cfg\") pod \"kindnet-rpr2g\" (UID: \"59f42291-1016-4584-9fdb-5df09910070b\") " pod="kube-system/kindnet-rpr2g"
	Nov 22 00:31:08 no-preload-983546 kubelet[2303]: E1122 00:31:08.102959    2303 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 22 00:31:08 no-preload-983546 kubelet[2303]: E1122 00:31:08.102994    2303 projected.go:196] Error preparing data for projected volume kube-api-access-nzhww for pod kube-system/kube-proxy-gnlfp: configmap "kube-root-ca.crt" not found
	Nov 22 00:31:08 no-preload-983546 kubelet[2303]: E1122 00:31:08.103090    2303 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b842766-a9da-46e8-9259-f0cdca13c349-kube-api-access-nzhww podName:0b842766-a9da-46e8-9259-f0cdca13c349 nodeName:}" failed. No retries permitted until 2025-11-22 00:31:08.603044508 +0000 UTC m=+5.714291287 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-nzhww" (UniqueName: "kubernetes.io/projected/0b842766-a9da-46e8-9259-f0cdca13c349-kube-api-access-nzhww") pod "kube-proxy-gnlfp" (UID: "0b842766-a9da-46e8-9259-f0cdca13c349") : configmap "kube-root-ca.crt" not found
	Nov 22 00:31:08 no-preload-983546 kubelet[2303]: E1122 00:31:08.103222    2303 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 22 00:31:08 no-preload-983546 kubelet[2303]: E1122 00:31:08.103254    2303 projected.go:196] Error preparing data for projected volume kube-api-access-54hbj for pod kube-system/kindnet-rpr2g: configmap "kube-root-ca.crt" not found
	Nov 22 00:31:08 no-preload-983546 kubelet[2303]: E1122 00:31:08.103311    2303 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/59f42291-1016-4584-9fdb-5df09910070b-kube-api-access-54hbj podName:59f42291-1016-4584-9fdb-5df09910070b nodeName:}" failed. No retries permitted until 2025-11-22 00:31:08.603294101 +0000 UTC m=+5.714540890 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-54hbj" (UniqueName: "kubernetes.io/projected/59f42291-1016-4584-9fdb-5df09910070b-kube-api-access-54hbj") pod "kindnet-rpr2g" (UID: "59f42291-1016-4584-9fdb-5df09910070b") : configmap "kube-root-ca.crt" not found
	Nov 22 00:31:09 no-preload-983546 kubelet[2303]: I1122 00:31:09.013699    2303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gnlfp" podStartSLOduration=2.013680734 podStartE2EDuration="2.013680734s" podCreationTimestamp="2025-11-22 00:31:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:31:09.013396867 +0000 UTC m=+6.124643665" watchObservedRunningTime="2025-11-22 00:31:09.013680734 +0000 UTC m=+6.124927532"
	Nov 22 00:31:12 no-preload-983546 kubelet[2303]: I1122 00:31:12.021869    2303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-rpr2g" podStartSLOduration=2.910874473 podStartE2EDuration="5.021851969s" podCreationTimestamp="2025-11-22 00:31:07 +0000 UTC" firstStartedPulling="2025-11-22 00:31:08.886330198 +0000 UTC m=+5.997576978" lastFinishedPulling="2025-11-22 00:31:10.997307693 +0000 UTC m=+8.108554474" observedRunningTime="2025-11-22 00:31:12.021720405 +0000 UTC m=+9.132967205" watchObservedRunningTime="2025-11-22 00:31:12.021851969 +0000 UTC m=+9.133098767"
	Nov 22 00:31:21 no-preload-983546 kubelet[2303]: I1122 00:31:21.833579    2303 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 22 00:31:21 no-preload-983546 kubelet[2303]: I1122 00:31:21.884426    2303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92a4504e-35be-4d9d-86ae-a574cc38590b-config-volume\") pod \"coredns-66bc5c9577-4psr2\" (UID: \"92a4504e-35be-4d9d-86ae-a574cc38590b\") " pod="kube-system/coredns-66bc5c9577-4psr2"
	Nov 22 00:31:21 no-preload-983546 kubelet[2303]: I1122 00:31:21.884630    2303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwmxq\" (UniqueName: \"kubernetes.io/projected/92a4504e-35be-4d9d-86ae-a574cc38590b-kube-api-access-lwmxq\") pod \"coredns-66bc5c9577-4psr2\" (UID: \"92a4504e-35be-4d9d-86ae-a574cc38590b\") " pod="kube-system/coredns-66bc5c9577-4psr2"
	Nov 22 00:31:21 no-preload-983546 kubelet[2303]: I1122 00:31:21.884671    2303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a6c69c5d-deb0-4c04-af56-6a7a594505ca-tmp\") pod \"storage-provisioner\" (UID: \"a6c69c5d-deb0-4c04-af56-6a7a594505ca\") " pod="kube-system/storage-provisioner"
	Nov 22 00:31:21 no-preload-983546 kubelet[2303]: I1122 00:31:21.884693    2303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfw9m\" (UniqueName: \"kubernetes.io/projected/a6c69c5d-deb0-4c04-af56-6a7a594505ca-kube-api-access-tfw9m\") pod \"storage-provisioner\" (UID: \"a6c69c5d-deb0-4c04-af56-6a7a594505ca\") " pod="kube-system/storage-provisioner"
	Nov 22 00:31:23 no-preload-983546 kubelet[2303]: I1122 00:31:23.049483    2303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-4psr2" podStartSLOduration=15.049466125 podStartE2EDuration="15.049466125s" podCreationTimestamp="2025-11-22 00:31:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:31:23.039691164 +0000 UTC m=+20.150937962" watchObservedRunningTime="2025-11-22 00:31:23.049466125 +0000 UTC m=+20.160712923"
	Nov 22 00:31:23 no-preload-983546 kubelet[2303]: I1122 00:31:23.057976    2303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.057959453 podStartE2EDuration="14.057959453s" podCreationTimestamp="2025-11-22 00:31:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:31:23.057866375 +0000 UTC m=+20.169113165" watchObservedRunningTime="2025-11-22 00:31:23.057959453 +0000 UTC m=+20.169206251"
	Nov 22 00:31:25 no-preload-983546 kubelet[2303]: I1122 00:31:25.001537    2303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmhn6\" (UniqueName: \"kubernetes.io/projected/fb74c704-d21d-4567-8e3f-cfa2d8132aa9-kube-api-access-rmhn6\") pod \"busybox\" (UID: \"fb74c704-d21d-4567-8e3f-cfa2d8132aa9\") " pod="default/busybox"
	Nov 22 00:31:32 no-preload-983546 kubelet[2303]: E1122 00:31:32.062124    2303 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:50752->127.0.0.1:33753: write tcp 127.0.0.1:50752->127.0.0.1:33753: write: broken pipe
	
	
	==> storage-provisioner [dc2819a4e6bf1acaaa3dc97173161cccdcc171e649d2390be1c7bfc28feb58c9] <==
	I1122 00:31:22.215766       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1122 00:31:22.224361       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1122 00:31:22.224410       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1122 00:31:22.226108       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:31:22.230138       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1122 00:31:22.230274       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1122 00:31:22.230418       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-983546_6609771f-9e48-48e0-b136-1060a5ebbe2c!
	I1122 00:31:22.230429       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6198fa6c-b306-4e68-b0dd-7835a65484f8", APIVersion:"v1", ResourceVersion:"412", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-983546_6609771f-9e48-48e0-b136-1060a5ebbe2c became leader
	W1122 00:31:22.232095       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:31:22.235776       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1122 00:31:22.331356       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-983546_6609771f-9e48-48e0-b136-1060a5ebbe2c!
	W1122 00:31:24.239962       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:31:24.244417       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:31:26.246708       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:31:26.249835       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:31:28.252280       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:31:28.255960       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:31:30.258262       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:31:30.262779       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:31:32.266169       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:31:32.273124       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-983546 -n no-preload-983546
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-983546 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.00s)
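
For reference, the post-mortem above can be reproduced by hand. A minimal bash sketch, assuming the no-preload-983546 profile still exists: the first command is the collection path minikube's own failure boxes recommend, and the other two repeat the helpers_test.go probes shown above.

	# Regenerate the component dump (==> etcd <==, ==> kubelet <==, ...) into a file
	out/minikube-linux-amd64 -p no-preload-983546 logs --file=logs.txt
	# Repeat the two post-mortem probes from helpers_test.go
	out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-983546 -n no-preload-983546
	kubectl --context no-preload-983546 get po -o=jsonpath='{.items[*].metadata.name}' -A --field-selector=status.phase!=Running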

TestStartStop/group/old-k8s-version/serial/Pause (5.5s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-377321 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-377321 --alsologtostderr -v=1: exit status 80 (1.692691064s)

-- stdout --
	* Pausing node old-k8s-version-377321 ... 
	
	

-- /stdout --
** stderr ** 
	I1122 00:32:27.940874  258973 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:32:27.941233  258973 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:32:27.941243  258973 out.go:374] Setting ErrFile to fd 2...
	I1122 00:32:27.941248  258973 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:32:27.941480  258973 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9122/.minikube/bin
	I1122 00:32:27.941702  258973 out.go:368] Setting JSON to false
	I1122 00:32:27.941721  258973 mustload.go:66] Loading cluster: old-k8s-version-377321
	I1122 00:32:27.942081  258973 config.go:182] Loaded profile config "old-k8s-version-377321": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1122 00:32:27.942445  258973 cli_runner.go:164] Run: docker container inspect old-k8s-version-377321 --format={{.State.Status}}
	I1122 00:32:27.960594  258973 host.go:66] Checking if "old-k8s-version-377321" exists ...
	I1122 00:32:27.960840  258973 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:32:28.018575  258973 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:76 OomKillDisable:false NGoroutines:85 SystemTime:2025-11-22 00:32:28.009136682 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1122 00:32:28.019208  258973 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-377321 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1122 00:32:28.021620  258973 out.go:179] * Pausing node old-k8s-version-377321 ... 
	I1122 00:32:28.022615  258973 host.go:66] Checking if "old-k8s-version-377321" exists ...
	I1122 00:32:28.022857  258973 ssh_runner.go:195] Run: systemctl --version
	I1122 00:32:28.022900  258973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-377321
	I1122 00:32:28.040076  258973 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/old-k8s-version-377321/id_rsa Username:docker}
	I1122 00:32:28.128360  258973 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:32:28.139731  258973 pause.go:52] kubelet running: true
	I1122 00:32:28.139788  258973 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1122 00:32:28.302620  258973 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1122 00:32:28.302691  258973 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1122 00:32:28.369677  258973 cri.go:89] found id: "93bfe3b02b3028d26df5463ec28e27751aec1aaa96f6b52964a50d6b63a63975"
	I1122 00:32:28.369710  258973 cri.go:89] found id: "801c8d5d08f560e17fd4023d35002a9afed8af82fe042078f52484439238fd06"
	I1122 00:32:28.369717  258973 cri.go:89] found id: "6a1f00984a7dff4ce68585b4b0994ccd7b263abf46aef826150cbb2693c2b895"
	I1122 00:32:28.369722  258973 cri.go:89] found id: "570f113a27a5135a9bb473c8bdf01eb25f09ab8108a4e98dd642e15f17472989"
	I1122 00:32:28.369727  258973 cri.go:89] found id: "6fd900059ec31ad554d574671f6b2f24e47fc4c2cfa17b61d25d410687f7c02f"
	I1122 00:32:28.369733  258973 cri.go:89] found id: "0c7b31cf741c7a5491efff25f26daaf7e50f1b38c7b0275cb2a437a4babfc650"
	I1122 00:32:28.369738  258973 cri.go:89] found id: "5819251d36741016f113d53581c7c528ace5865eeb58ffe60e69f44d077e7cd2"
	I1122 00:32:28.369743  258973 cri.go:89] found id: "ed98561b5f5aba5d27a95290d74bdb9ae0ac348ec62233efd0e83b347c5ad42b"
	I1122 00:32:28.369747  258973 cri.go:89] found id: "ab6a019fd3f49e6fab48be38ce5872af37de1804d6bf8f07d05a6d98aaedd575"
	I1122 00:32:28.369768  258973 cri.go:89] found id: "1159f8806d56e7efa54dd600adca00cd67da4d0d67ec86f0feef41418ddc6a5d"
	I1122 00:32:28.369774  258973 cri.go:89] found id: "d3398b58126a8fcaaa90af41bb9b636f054fe29a545311e069c0bf53e69969c0"
	I1122 00:32:28.369777  258973 cri.go:89] found id: ""
	I1122 00:32:28.369831  258973 ssh_runner.go:195] Run: sudo runc list -f json
	I1122 00:32:28.381302  258973 retry.go:31] will retry after 159.346823ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-22T00:32:28Z" level=error msg="open /run/runc: no such file or directory"
	I1122 00:32:28.541676  258973 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:32:28.554275  258973 pause.go:52] kubelet running: false
	I1122 00:32:28.554329  258973 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1122 00:32:28.692848  258973 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1122 00:32:28.692939  258973 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1122 00:32:28.756987  258973 cri.go:89] found id: "93bfe3b02b3028d26df5463ec28e27751aec1aaa96f6b52964a50d6b63a63975"
	I1122 00:32:28.757014  258973 cri.go:89] found id: "801c8d5d08f560e17fd4023d35002a9afed8af82fe042078f52484439238fd06"
	I1122 00:32:28.757021  258973 cri.go:89] found id: "6a1f00984a7dff4ce68585b4b0994ccd7b263abf46aef826150cbb2693c2b895"
	I1122 00:32:28.757026  258973 cri.go:89] found id: "570f113a27a5135a9bb473c8bdf01eb25f09ab8108a4e98dd642e15f17472989"
	I1122 00:32:28.757031  258973 cri.go:89] found id: "6fd900059ec31ad554d574671f6b2f24e47fc4c2cfa17b61d25d410687f7c02f"
	I1122 00:32:28.757037  258973 cri.go:89] found id: "0c7b31cf741c7a5491efff25f26daaf7e50f1b38c7b0275cb2a437a4babfc650"
	I1122 00:32:28.757040  258973 cri.go:89] found id: "5819251d36741016f113d53581c7c528ace5865eeb58ffe60e69f44d077e7cd2"
	I1122 00:32:28.757043  258973 cri.go:89] found id: "ed98561b5f5aba5d27a95290d74bdb9ae0ac348ec62233efd0e83b347c5ad42b"
	I1122 00:32:28.757045  258973 cri.go:89] found id: "ab6a019fd3f49e6fab48be38ce5872af37de1804d6bf8f07d05a6d98aaedd575"
	I1122 00:32:28.757068  258973 cri.go:89] found id: "1159f8806d56e7efa54dd600adca00cd67da4d0d67ec86f0feef41418ddc6a5d"
	I1122 00:32:28.757076  258973 cri.go:89] found id: "d3398b58126a8fcaaa90af41bb9b636f054fe29a545311e069c0bf53e69969c0"
	I1122 00:32:28.757081  258973 cri.go:89] found id: ""
	I1122 00:32:28.757120  258973 ssh_runner.go:195] Run: sudo runc list -f json
	I1122 00:32:28.768495  258973 retry.go:31] will retry after 551.679138ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-22T00:32:28Z" level=error msg="open /run/runc: no such file or directory"
	I1122 00:32:29.321273  258973 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:32:29.333894  258973 pause.go:52] kubelet running: false
	I1122 00:32:29.333939  258973 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1122 00:32:29.478561  258973 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1122 00:32:29.478648  258973 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1122 00:32:29.554172  258973 cri.go:89] found id: "93bfe3b02b3028d26df5463ec28e27751aec1aaa96f6b52964a50d6b63a63975"
	I1122 00:32:29.554197  258973 cri.go:89] found id: "801c8d5d08f560e17fd4023d35002a9afed8af82fe042078f52484439238fd06"
	I1122 00:32:29.554203  258973 cri.go:89] found id: "6a1f00984a7dff4ce68585b4b0994ccd7b263abf46aef826150cbb2693c2b895"
	I1122 00:32:29.554208  258973 cri.go:89] found id: "570f113a27a5135a9bb473c8bdf01eb25f09ab8108a4e98dd642e15f17472989"
	I1122 00:32:29.554211  258973 cri.go:89] found id: "6fd900059ec31ad554d574671f6b2f24e47fc4c2cfa17b61d25d410687f7c02f"
	I1122 00:32:29.554223  258973 cri.go:89] found id: "0c7b31cf741c7a5491efff25f26daaf7e50f1b38c7b0275cb2a437a4babfc650"
	I1122 00:32:29.554237  258973 cri.go:89] found id: "5819251d36741016f113d53581c7c528ace5865eeb58ffe60e69f44d077e7cd2"
	I1122 00:32:29.554244  258973 cri.go:89] found id: "ed98561b5f5aba5d27a95290d74bdb9ae0ac348ec62233efd0e83b347c5ad42b"
	I1122 00:32:29.554247  258973 cri.go:89] found id: "ab6a019fd3f49e6fab48be38ce5872af37de1804d6bf8f07d05a6d98aaedd575"
	I1122 00:32:29.554254  258973 cri.go:89] found id: "1159f8806d56e7efa54dd600adca00cd67da4d0d67ec86f0feef41418ddc6a5d"
	I1122 00:32:29.554259  258973 cri.go:89] found id: "d3398b58126a8fcaaa90af41bb9b636f054fe29a545311e069c0bf53e69969c0"
	I1122 00:32:29.554262  258973 cri.go:89] found id: ""
	I1122 00:32:29.554300  258973 ssh_runner.go:195] Run: sudo runc list -f json
	I1122 00:32:29.567238  258973 out.go:203] 
	W1122 00:32:29.568204  258973 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-22T00:32:29Z" level=error msg="open /run/runc: no such file or directory"
	
	W1122 00:32:29.568257  258973 out.go:285] * 
	W1122 00:32:29.572374  258973 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1122 00:32:29.573480  258973 out.go:203] 
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p old-k8s-version-377321 --alsologtostderr -v=1 failed: exit status 80
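For reference, the probe that fails above can be reproduced outside the test harness: the pause path shells into the node and runs `sudo runc list -f json`, which exits 1 here because /run/runc is missing. That lines up with the kic container mounting a tmpfs on /run (see the docker inspect output below) and having been restarted shortly before, so the runc state directory plausibly did not survive the restart. A minimal sketch, assuming the docker driver and the profile container name old-k8s-version-377321 taken from this report; this is illustrative Go, not minikube code:

	// probe_runc.go: reproduce the failing pause probe against the node
	// container. Assumes the docker driver, so the node is reachable via
	// `docker exec`; the container name is taken from this report.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// The same command minikube's pause path runs over SSH.
		cmd := exec.Command("docker", "exec", "old-k8s-version-377321",
			"sudo", "runc", "list", "-f", "json")
		out, err := cmd.CombinedOutput()
		if err != nil {
			// Expected here: exit status 1 with
			// "open /run/runc: no such file or directory" on stderr.
			fmt.Printf("probe failed: %v\n%s", err, out)
			return
		}
		fmt.Printf("running containers: %s\n", out)
	}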
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-377321
helpers_test.go:243: (dbg) docker inspect old-k8s-version-377321:
-- stdout --
	[
	    {
	        "Id": "dffbefc5635f5e12082532a9ddb8dae95b35a43f9ae00cf681a38814017e28e1",
	        "Created": "2025-11-22T00:30:25.888209771Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 246332,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-22T00:31:32.723349671Z",
	            "FinishedAt": "2025-11-22T00:31:31.817793072Z"
	        },
	        "Image": "sha256:e5906b22e872a17998ae88aee6d850484e7a99144e0db6afcf2c44a53e6042d4",
	        "ResolvConfPath": "/var/lib/docker/containers/dffbefc5635f5e12082532a9ddb8dae95b35a43f9ae00cf681a38814017e28e1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dffbefc5635f5e12082532a9ddb8dae95b35a43f9ae00cf681a38814017e28e1/hostname",
	        "HostsPath": "/var/lib/docker/containers/dffbefc5635f5e12082532a9ddb8dae95b35a43f9ae00cf681a38814017e28e1/hosts",
	        "LogPath": "/var/lib/docker/containers/dffbefc5635f5e12082532a9ddb8dae95b35a43f9ae00cf681a38814017e28e1/dffbefc5635f5e12082532a9ddb8dae95b35a43f9ae00cf681a38814017e28e1-json.log",
	        "Name": "/old-k8s-version-377321",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-377321:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-377321",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dffbefc5635f5e12082532a9ddb8dae95b35a43f9ae00cf681a38814017e28e1",
	                "LowerDir": "/var/lib/docker/overlay2/6ac3be49161fcc219b6798f3ada9e8452efb3f889f36e7992c217a75468d65a5-init/diff:/var/lib/docker/overlay2/ba9391341a6f38c98c330a240b44a37e901d779a7c95e15c141c59c46e09c348/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6ac3be49161fcc219b6798f3ada9e8452efb3f889f36e7992c217a75468d65a5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6ac3be49161fcc219b6798f3ada9e8452efb3f889f36e7992c217a75468d65a5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6ac3be49161fcc219b6798f3ada9e8452efb3f889f36e7992c217a75468d65a5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-377321",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-377321/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-377321",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-377321",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-377321",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "a48eae4638b94dfb5d133b2065952c7c968095d0c86ef9b9429c6276dbb06902",
	            "SandboxKey": "/var/run/docker/netns/a48eae4638b9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-377321": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "476dc93872199ad7652e7290a0113d19cf28252d1edac64765d412bab275e357",
	                    "EndpointID": "a7a16947fb8e2b17fe29195e5b2420526809458790ab58dd6c8eb2c8b97d99de",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "c6:00:ec:5c:a2:af",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-377321",
	                        "dffbefc5635f"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
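The full docker inspect dump above is kept for completeness, but for triage only a few fields matter: State (running, not paused, with StartedAt later than FinishedAt, i.e. the node was restarted between 00:31:31 and 00:31:32) and the published 22/tcp port used for SSH. A minimal sketch that pulls just those fields with an inspect format template (the same template style the provisioning log below uses for the SSH port); the container name is again taken from this report:

	// inspect_state.go: extract the triage-relevant fields from
	// `docker inspect` instead of reading the full JSON document.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		format := `{{.State.Status}} paused={{.State.Paused}} ` +
			`started={{.State.StartedAt}} finished={{.State.FinishedAt}} ` +
			`ssh={{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "inspect", "-f", format,
			"old-k8s-version-377321").CombinedOutput()
		if err != nil {
			fmt.Printf("inspect failed: %v\n%s", err, out)
			return
		}
		// For this report this prints: running paused=false ... ssh=33063
		fmt.Printf("%s", out)
	}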
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-377321 -n old-k8s-version-377321
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-377321 -n old-k8s-version-377321: exit status 2 (315.096179ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-377321 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-377321 logs -n 25: (1.08357242s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬────────────────────
─┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼────────────────────
─┤
	│ delete  │ -p NoKubernetes-953061                                                                                                                                                                                                                        │ NoKubernetes-953061    │ jenkins │ v1.37.0 │ 22 Nov 25 00:30 UTC │ 22 Nov 25 00:30 UTC │
	│ start   │ -p NoKubernetes-953061 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                         │ NoKubernetes-953061    │ jenkins │ v1.37.0 │ 22 Nov 25 00:30 UTC │ 22 Nov 25 00:30 UTC │
	│ ssh     │ -p NoKubernetes-953061 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-953061    │ jenkins │ v1.37.0 │ 22 Nov 25 00:30 UTC │                     │
	│ ssh     │ cert-options-524062 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-524062    │ jenkins │ v1.37.0 │ 22 Nov 25 00:30 UTC │ 22 Nov 25 00:30 UTC │
	│ ssh     │ -p cert-options-524062 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-524062    │ jenkins │ v1.37.0 │ 22 Nov 25 00:30 UTC │ 22 Nov 25 00:30 UTC │
	│ delete  │ -p cert-options-524062                                                                                                                                                                                                                        │ cert-options-524062    │ jenkins │ v1.37.0 │ 22 Nov 25 00:30 UTC │ 22 Nov 25 00:30 UTC │
	│ start   │ -p old-k8s-version-377321 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-377321 │ jenkins │ v1.37.0 │ 22 Nov 25 00:30 UTC │ 22 Nov 25 00:31 UTC │
	│ stop    │ -p NoKubernetes-953061                                                                                                                                                                                                                        │ NoKubernetes-953061    │ jenkins │ v1.37.0 │ 22 Nov 25 00:30 UTC │ 22 Nov 25 00:30 UTC │
	│ start   │ -p NoKubernetes-953061 --driver=docker  --container-runtime=crio                                                                                                                                                                              │ NoKubernetes-953061    │ jenkins │ v1.37.0 │ 22 Nov 25 00:30 UTC │ 22 Nov 25 00:30 UTC │
	│ ssh     │ -p NoKubernetes-953061 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-953061    │ jenkins │ v1.37.0 │ 22 Nov 25 00:30 UTC │                     │
	│ delete  │ -p NoKubernetes-953061                                                                                                                                                                                                                        │ NoKubernetes-953061    │ jenkins │ v1.37.0 │ 22 Nov 25 00:30 UTC │ 22 Nov 25 00:30 UTC │
	│ start   │ -p no-preload-983546 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-983546      │ jenkins │ v1.37.0 │ 22 Nov 25 00:30 UTC │ 22 Nov 25 00:31 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-377321 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-377321 │ jenkins │ v1.37.0 │ 22 Nov 25 00:31 UTC │                     │
	│ stop    │ -p old-k8s-version-377321 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-377321 │ jenkins │ v1.37.0 │ 22 Nov 25 00:31 UTC │ 22 Nov 25 00:31 UTC │
	│ addons  │ enable metrics-server -p no-preload-983546 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-983546      │ jenkins │ v1.37.0 │ 22 Nov 25 00:31 UTC │                     │
	│ addons  │ enable dashboard -p old-k8s-version-377321 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-377321 │ jenkins │ v1.37.0 │ 22 Nov 25 00:31 UTC │ 22 Nov 25 00:31 UTC │
	│ start   │ -p old-k8s-version-377321 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-377321 │ jenkins │ v1.37.0 │ 22 Nov 25 00:31 UTC │ 22 Nov 25 00:32 UTC │
	│ stop    │ -p no-preload-983546 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-983546      │ jenkins │ v1.37.0 │ 22 Nov 25 00:31 UTC │ 22 Nov 25 00:31 UTC │
	│ start   │ -p cert-expiration-624739 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-624739 │ jenkins │ v1.37.0 │ 22 Nov 25 00:31 UTC │ 22 Nov 25 00:31 UTC │
	│ delete  │ -p cert-expiration-624739                                                                                                                                                                                                                     │ cert-expiration-624739 │ jenkins │ v1.37.0 │ 22 Nov 25 00:31 UTC │ 22 Nov 25 00:31 UTC │
	│ start   │ -p embed-certs-084979 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-084979     │ jenkins │ v1.37.0 │ 22 Nov 25 00:31 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-983546 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-983546      │ jenkins │ v1.37.0 │ 22 Nov 25 00:31 UTC │ 22 Nov 25 00:31 UTC │
	│ start   │ -p no-preload-983546 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-983546      │ jenkins │ v1.37.0 │ 22 Nov 25 00:31 UTC │                     │
	│ image   │ old-k8s-version-377321 image list --format=json                                                                                                                                                                                               │ old-k8s-version-377321 │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │ 22 Nov 25 00:32 UTC │
	│ pause   │ -p old-k8s-version-377321 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-377321 │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴────────────────────
─┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/22 00:31:50
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1122 00:31:50.613786  252747 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:31:50.613899  252747 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:31:50.613911  252747 out.go:374] Setting ErrFile to fd 2...
	I1122 00:31:50.613916  252747 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:31:50.614172  252747 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9122/.minikube/bin
	I1122 00:31:50.614647  252747 out.go:368] Setting JSON to false
	I1122 00:31:50.615814  252747 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4460,"bootTime":1763767051,"procs":337,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1122 00:31:50.615873  252747 start.go:143] virtualization: kvm guest
	I1122 00:31:50.617870  252747 out.go:179] * [no-preload-983546] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1122 00:31:50.619124  252747 notify.go:221] Checking for updates...
	I1122 00:31:50.619164  252747 out.go:179]   - MINIKUBE_LOCATION=21934
	I1122 00:31:50.620473  252747 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 00:31:50.621715  252747 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-9122/kubeconfig
	I1122 00:31:50.622926  252747 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-9122/.minikube
	I1122 00:31:50.623998  252747 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1122 00:31:50.625079  252747 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 00:31:50.626775  252747 config.go:182] Loaded profile config "no-preload-983546": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:31:50.627519  252747 driver.go:422] Setting default libvirt URI to qemu:///system
	I1122 00:31:50.653690  252747 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1122 00:31:50.653793  252747 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:31:50.720537  252747 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-22 00:31:50.710138927 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1122 00:31:50.720702  252747 docker.go:319] overlay module found
	I1122 00:31:50.722520  252747 out.go:179] * Using the docker driver based on existing profile
	I1122 00:31:50.723640  252747 start.go:309] selected driver: docker
	I1122 00:31:50.723664  252747 start.go:930] validating driver "docker" against &{Name:no-preload-983546 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-983546 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:31:50.723763  252747 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 00:31:50.724302  252747 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:31:50.785041  252747 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-22 00:31:50.775165835 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1122 00:31:50.785404  252747 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:31:50.785436  252747 cni.go:84] Creating CNI manager for ""
	I1122 00:31:50.785505  252747 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1122 00:31:50.785549  252747 start.go:353] cluster config:
	{Name:no-preload-983546 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-983546 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:31:50.787349  252747 out.go:179] * Starting "no-preload-983546" primary control-plane node in "no-preload-983546" cluster
	I1122 00:31:50.792004  252747 cache.go:134] Beginning downloading kic base image for docker with crio
	I1122 00:31:50.793295  252747 out.go:179] * Pulling base image v0.0.48-1763588073-21934 ...
	I1122 00:31:50.794564  252747 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:31:50.794665  252747 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon
	I1122 00:31:50.794683  252747 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/no-preload-983546/config.json ...
	I1122 00:31:50.794898  252747 cache.go:107] acquiring lock: {Name:mk4b1b351b6e05df924b1dea34823a5bae874e1f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:31:50.794939  252747 cache.go:107] acquiring lock: {Name:mk2e1ee991a04da9a748a7199e1558e3e5412fee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:31:50.794973  252747 cache.go:107] acquiring lock: {Name:mk6d624ce3b8b502967383fd9c495ee3efa5f0d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:31:50.794927  252747 cache.go:107] acquiring lock: {Name:mkcfead1c087753e04498b19f3a6339bfee4e556 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:31:50.794974  252747 cache.go:107] acquiring lock: {Name:mkeb32bd396caf88f92b976cb818c75db7b8b2e0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:31:50.795024  252747 cache.go:115] /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1122 00:31:50.795027  252747 cache.go:115] /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1122 00:31:50.795034  252747 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 133.763µs
	I1122 00:31:50.795035  252747 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 64.826µs
	I1122 00:31:50.795049  252747 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1122 00:31:50.795062  252747 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1122 00:31:50.795015  252747 cache.go:115] /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1122 00:31:50.795078  252747 cache.go:115] /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1122 00:31:50.795085  252747 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 150.824µs
	I1122 00:31:50.795087  252747 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 201.15µs
	I1122 00:31:50.795093  252747 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1122 00:31:50.795115  252747 cache.go:115] /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1122 00:31:50.795040  252747 cache.go:107] acquiring lock: {Name:mk12d63b3212c690b6dceb2e93efe384169c5870 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:31:50.795133  252747 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 204.734µs
	I1122 00:31:50.795125  252747 cache.go:107] acquiring lock: {Name:mk0912b033af5e0dc6737ad3b2b166867675943b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:31:50.795152  252747 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1122 00:31:50.795095  252747 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1122 00:31:50.795156  252747 cache.go:115] /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1122 00:31:50.795184  252747 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 193.649µs
	I1122 00:31:50.795193  252747 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1122 00:31:50.795030  252747 cache.go:107] acquiring lock: {Name:mk96320d9e02559e4fb5bcee79e63af23abf6b0e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:31:50.795245  252747 cache.go:115] /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1122 00:31:50.795257  252747 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 230.645µs
	I1122 00:31:50.795270  252747 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1122 00:31:50.795319  252747 cache.go:115] /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1122 00:31:50.795342  252747 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 259.065µs
	I1122 00:31:50.795357  252747 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1122 00:31:50.795371  252747 cache.go:87] Successfully saved all images to host disk.
	I1122 00:31:50.825409  252747 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon, skipping pull
	I1122 00:31:50.825435  252747 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e exists in daemon, skipping load
	I1122 00:31:50.825462  252747 cache.go:243] Successfully downloaded all kic artifacts
	I1122 00:31:50.825496  252747 start.go:360] acquireMachinesLock for no-preload-983546: {Name:mk180ef84c85822552d32d9baa5d4747338a2875 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:31:50.825576  252747 start.go:364] duration metric: took 56.588µs to acquireMachinesLock for "no-preload-983546"
	I1122 00:31:50.825605  252747 start.go:96] Skipping create...Using existing machine configuration
	I1122 00:31:50.825616  252747 fix.go:54] fixHost starting: 
	I1122 00:31:50.825975  252747 cli_runner.go:164] Run: docker container inspect no-preload-983546 --format={{.State.Status}}
	I1122 00:31:50.846644  252747 fix.go:112] recreateIfNeeded on no-preload-983546: state=Stopped err=<nil>
	W1122 00:31:50.846687  252747 fix.go:138] unexpected machine state, will restart: <nil>
	I1122 00:31:48.142519  250396 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21934-9122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-084979:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -I lz4 -xf /preloaded.tar -C /extractDir: (4.290820647s)
	I1122 00:31:48.142557  250396 kic.go:203] duration metric: took 4.290978466s to extract preloaded images to volume ...
	W1122 00:31:48.142663  250396 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1122 00:31:48.142708  250396 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1122 00:31:48.142755  250396 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1122 00:31:48.205487  250396 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-084979 --name embed-certs-084979 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-084979 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-084979 --network embed-certs-084979 --ip 192.168.94.2 --volume embed-certs-084979:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e
	I1122 00:31:48.500341  250396 cli_runner.go:164] Run: docker container inspect embed-certs-084979 --format={{.State.Running}}
	I1122 00:31:48.518709  250396 cli_runner.go:164] Run: docker container inspect embed-certs-084979 --format={{.State.Status}}
	I1122 00:31:48.536654  250396 cli_runner.go:164] Run: docker exec embed-certs-084979 stat /var/lib/dpkg/alternatives/iptables
	I1122 00:31:48.585157  250396 oci.go:144] the created container "embed-certs-084979" has a running status.
	I1122 00:31:48.585190  250396 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21934-9122/.minikube/machines/embed-certs-084979/id_rsa...
	I1122 00:31:48.825142  250396 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21934-9122/.minikube/machines/embed-certs-084979/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1122 00:31:48.854801  250396 cli_runner.go:164] Run: docker container inspect embed-certs-084979 --format={{.State.Status}}
	I1122 00:31:48.875986  250396 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1122 00:31:48.876014  250396 kic_runner.go:114] Args: [docker exec --privileged embed-certs-084979 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1122 00:31:48.926633  250396 cli_runner.go:164] Run: docker container inspect embed-certs-084979 --format={{.State.Status}}
	I1122 00:31:48.944315  250396 machine.go:94] provisionDockerMachine start ...
	I1122 00:31:48.944393  250396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-084979
	I1122 00:31:48.962453  250396 main.go:143] libmachine: Using SSH client type: native
	I1122 00:31:48.962805  250396 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1122 00:31:48.962836  250396 main.go:143] libmachine: About to run SSH command:
	hostname
	I1122 00:31:49.093426  250396 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-084979
	
	I1122 00:31:49.093460  250396 ubuntu.go:182] provisioning hostname "embed-certs-084979"
	I1122 00:31:49.093553  250396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-084979
	I1122 00:31:49.114572  250396 main.go:143] libmachine: Using SSH client type: native
	I1122 00:31:49.114795  250396 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1122 00:31:49.114808  250396 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-084979 && echo "embed-certs-084979" | sudo tee /etc/hostname
	I1122 00:31:49.250649  250396 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-084979
	
	I1122 00:31:49.250730  250396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-084979
	I1122 00:31:49.271274  250396 main.go:143] libmachine: Using SSH client type: native
	I1122 00:31:49.271583  250396 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1122 00:31:49.271610  250396 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-084979' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-084979/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-084979' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1122 00:31:49.391218  250396 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1122 00:31:49.391319  250396 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21934-9122/.minikube CaCertPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21934-9122/.minikube}
	I1122 00:31:49.391369  250396 ubuntu.go:190] setting up certificates
	I1122 00:31:49.391380  250396 provision.go:84] configureAuth start
	I1122 00:31:49.391428  250396 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-084979
	I1122 00:31:49.407849  250396 provision.go:143] copyHostCerts
	I1122 00:31:49.407897  250396 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9122/.minikube/ca.pem, removing ...
	I1122 00:31:49.407905  250396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9122/.minikube/ca.pem
	I1122 00:31:49.407968  250396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21934-9122/.minikube/ca.pem (1078 bytes)
	I1122 00:31:49.408065  250396 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9122/.minikube/cert.pem, removing ...
	I1122 00:31:49.408077  250396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9122/.minikube/cert.pem
	I1122 00:31:49.408115  250396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21934-9122/.minikube/cert.pem (1123 bytes)
	I1122 00:31:49.408181  250396 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9122/.minikube/key.pem, removing ...
	I1122 00:31:49.408189  250396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9122/.minikube/key.pem
	I1122 00:31:49.408220  250396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21934-9122/.minikube/key.pem (1675 bytes)
	I1122 00:31:49.408277  250396 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21934-9122/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca-key.pem org=jenkins.embed-certs-084979 san=[127.0.0.1 192.168.94.2 embed-certs-084979 localhost minikube]
	I1122 00:31:49.482513  250396 provision.go:177] copyRemoteCerts
	I1122 00:31:49.482567  250396 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1122 00:31:49.482599  250396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-084979
	I1122 00:31:49.499242  250396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/embed-certs-084979/id_rsa Username:docker}
	I1122 00:31:49.589528  250396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1122 00:31:49.607716  250396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1122 00:31:49.624611  250396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1122 00:31:49.641681  250396 provision.go:87] duration metric: took 250.291766ms to configureAuth
	I1122 00:31:49.641704  250396 ubuntu.go:206] setting minikube options for container-runtime
	I1122 00:31:49.641865  250396 config.go:182] Loaded profile config "embed-certs-084979": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:31:49.641969  250396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-084979
	I1122 00:31:49.658924  250396 main.go:143] libmachine: Using SSH client type: native
	I1122 00:31:49.659163  250396 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1122 00:31:49.659186  250396 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1122 00:31:49.909146  250396 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1122 00:31:49.909183  250396 machine.go:97] duration metric: took 964.846655ms to provisionDockerMachine
	I1122 00:31:49.909196  250396 client.go:176] duration metric: took 6.612185161s to LocalClient.Create
	I1122 00:31:49.909218  250396 start.go:167] duration metric: took 6.612254944s to libmachine.API.Create "embed-certs-084979"
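
Every "About to run SSH command" step in this section is a command pushed over the tunnel that sshutil opens to 127.0.0.1:33068 with the machine's id_rsa key (see the "new ssh client" lines). A rough equivalent using golang.org/x/crypto/ssh — illustration only, with host-key checking disabled, which is tolerable here solely because the peer is a local test container:

    // Sketch: run one provisioning command over SSH, as the log's
    // ssh_runner steps do. Not minikube's implementation.
    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	keyBytes, err := os.ReadFile(".minikube/machines/embed-certs-084979/id_rsa")
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(keyBytes)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local test container only
    	}
    	client, err := ssh.Dial("tcp", "127.0.0.1:33068", cfg)
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()

    	sess, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer sess.Close()

    	// The exact command from the log; the raw string keeps its newlines.
    	out, err := sess.CombinedOutput(`sudo mkdir -p /etc/sysconfig && printf %s "
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`)
    	fmt.Printf("err=%v output=%s\n", err, out)
    }
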
	I1122 00:31:49.909228  250396 start.go:293] postStartSetup for "embed-certs-084979" (driver="docker")
	I1122 00:31:49.909242  250396 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1122 00:31:49.909315  250396 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1122 00:31:49.909391  250396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-084979
	I1122 00:31:49.926710  250396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/embed-certs-084979/id_rsa Username:docker}
	I1122 00:31:50.021185  250396 ssh_runner.go:195] Run: cat /etc/os-release
	I1122 00:31:50.024665  250396 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1122 00:31:50.024700  250396 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1122 00:31:50.024716  250396 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-9122/.minikube/addons for local assets ...
	I1122 00:31:50.024763  250396 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-9122/.minikube/files for local assets ...
	I1122 00:31:50.024833  250396 filesync.go:149] local asset: /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem -> 145852.pem in /etc/ssl/certs
	I1122 00:31:50.024916  250396 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1122 00:31:50.032263  250396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem --> /etc/ssl/certs/145852.pem (1708 bytes)
	I1122 00:31:50.051194  250396 start.go:296] duration metric: took 141.953441ms for postStartSetup
	I1122 00:31:50.051556  250396 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-084979
	I1122 00:31:50.070490  250396 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/embed-certs-084979/config.json ...
	I1122 00:31:50.070736  250396 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:31:50.070774  250396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-084979
	I1122 00:31:50.087432  250396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/embed-certs-084979/id_rsa Username:docker}
	I1122 00:31:50.174700  250396 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 00:31:50.179003  250396 start.go:128] duration metric: took 6.884067221s to createHost
	I1122 00:31:50.179029  250396 start.go:83] releasing machines lock for "embed-certs-084979", held for 6.884211229s
	I1122 00:31:50.179125  250396 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-084979
	I1122 00:31:50.197076  250396 ssh_runner.go:195] Run: cat /version.json
	I1122 00:31:50.197143  250396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-084979
	I1122 00:31:50.197081  250396 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1122 00:31:50.197259  250396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-084979
	I1122 00:31:50.216181  250396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/embed-certs-084979/id_rsa Username:docker}
	I1122 00:31:50.216448  250396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/embed-certs-084979/id_rsa Username:docker}
	I1122 00:31:50.405463  250396 ssh_runner.go:195] Run: systemctl --version
	I1122 00:31:50.412314  250396 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1122 00:31:50.449538  250396 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1122 00:31:50.454257  250396 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1122 00:31:50.454321  250396 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1122 00:31:50.481373  250396 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
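
Because kindnet will be installed as the CNI further below, minikube first shelves any bridge/podman CNI configs shipped in the base image by renaming them with a .mk_disabled suffix — that is all the find/-exec mv command above does. A sketch of the same rename pass in Go (illustration, not minikube's code):

    // Sketch: disable bridge/podman CNI configs by renaming them,
    // mirroring the find/mv command in the log.
    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	entries, err := os.ReadDir("/etc/cni/net.d")
    	if err != nil {
    		panic(err)
    	}
    	for _, e := range entries {
    		name := e.Name()
    		// Match the find predicates: regular files named *bridge* or
    		// *podman*, skipping anything already disabled.
    		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
    			continue
    		}
    		if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
    			continue
    		}
    		src := filepath.Join("/etc/cni/net.d", name)
    		if err := os.Rename(src, src+".mk_disabled"); err != nil {
    			panic(err)
    		}
    		fmt.Println("disabled", src)
    	}
    }
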
	I1122 00:31:50.481395  250396 start.go:496] detecting cgroup driver to use...
	I1122 00:31:50.481423  250396 detect.go:190] detected "systemd" cgroup driver on host os
	I1122 00:31:50.481468  250396 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1122 00:31:50.496946  250396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1122 00:31:50.509639  250396 docker.go:218] disabling cri-docker service (if available) ...
	I1122 00:31:50.509691  250396 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1122 00:31:50.529078  250396 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1122 00:31:50.546653  250396 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1122 00:31:50.641041  250396 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1122 00:31:50.747548  250396 docker.go:234] disabling docker service ...
	I1122 00:31:50.747616  250396 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1122 00:31:50.771023  250396 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1122 00:31:50.785391  250396 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1122 00:31:50.873942  250396 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1122 00:31:50.956488  250396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1122 00:31:50.970225  250396 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1122 00:31:50.988710  250396 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1122 00:31:50.988779  250396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:31:50.999173  250396 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1122 00:31:50.999240  250396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:31:51.009863  250396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:31:51.018586  250396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:31:51.027048  250396 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1122 00:31:51.035385  250396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:31:51.043855  250396 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:31:51.057140  250396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:31:51.066136  250396 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1122 00:31:51.074109  250396 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1122 00:31:51.082237  250396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:31:51.176401  250396 ssh_runner.go:195] Run: sudo systemctl restart crio
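
Reconstructed from the sed edits above, the drop-in /etc/crio/crio.conf.d/02-crio.conf ends up carrying approximately the following settings before the daemon-reload and crio restart (section placement follows CRI-O's documented config layout; the real file contains more keys):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
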
	I1122 00:31:51.314780  250396 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1122 00:31:51.314840  250396 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1122 00:31:51.318722  250396 start.go:564] Will wait 60s for crictl version
	I1122 00:31:51.318783  250396 ssh_runner.go:195] Run: which crictl
	I1122 00:31:51.322892  250396 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1122 00:31:51.351139  250396 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1122 00:31:51.351239  250396 ssh_runner.go:195] Run: crio --version
	I1122 00:31:51.382701  250396 ssh_runner.go:195] Run: crio --version
	I1122 00:31:51.420185  250396 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	W1122 00:31:49.823420  246023 pod_ready.go:104] pod "coredns-5dd5756b68-lwzsc" is not "Ready", error: <nil>
	W1122 00:31:51.824531  246023 pod_ready.go:104] pod "coredns-5dd5756b68-lwzsc" is not "Ready", error: <nil>
	I1122 00:31:51.421367  250396 cli_runner.go:164] Run: docker network inspect embed-certs-084979 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:31:51.440897  250396 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1122 00:31:51.444989  250396 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
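
The /etc/hosts rewrite above is minikube's idempotent update pattern: filter out any stale host.minikube.internal line, append the fresh gateway mapping, stage the result under /tmp, then sudo cp it back into place (a plain shell redirect would not run as root). Afterwards the file carries a line like:

    192.168.94.1	host.minikube.internal
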
	I1122 00:31:51.455982  250396 kubeadm.go:884] updating cluster {Name:embed-certs-084979 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-084979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1122 00:31:51.456177  250396 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:31:51.456229  250396 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:31:51.489550  250396 crio.go:514] all images are preloaded for cri-o runtime.
	I1122 00:31:51.489569  250396 crio.go:433] Images already preloaded, skipping extraction
	I1122 00:31:51.489613  250396 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:31:51.513343  250396 crio.go:514] all images are preloaded for cri-o runtime.
	I1122 00:31:51.513366  250396 cache_images.go:86] Images are preloaded, skipping loading
	I1122 00:31:51.513375  250396 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1122 00:31:51.513477  250396 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-084979 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-084979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1122 00:31:51.513562  250396 ssh_runner.go:195] Run: crio config
	I1122 00:31:51.555997  250396 cni.go:84] Creating CNI manager for ""
	I1122 00:31:51.556025  250396 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1122 00:31:51.556042  250396 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1122 00:31:51.556092  250396 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-084979 NodeName:embed-certs-084979 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1122 00:31:51.556218  250396 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-084979"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1122 00:31:51.556274  250396 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1122 00:31:51.564046  250396 binaries.go:51] Found k8s binaries, skipping transfer
	I1122 00:31:51.564132  250396 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1122 00:31:51.571550  250396 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1122 00:31:51.583692  250396 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1122 00:31:51.598125  250396 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1122 00:31:51.610121  250396 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1122 00:31:51.613403  250396 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:31:51.622567  250396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:31:51.701252  250396 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:31:51.725101  250396 certs.go:69] Setting up /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/embed-certs-084979 for IP: 192.168.94.2
	I1122 00:31:51.725122  250396 certs.go:195] generating shared ca certs ...
	I1122 00:31:51.725143  250396 certs.go:227] acquiring lock for ca certs: {Name:mk04e97b22fe69e444baefaaf0d53a0afe979470 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:31:51.725324  250396 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21934-9122/.minikube/ca.key
	I1122 00:31:51.725375  250396 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21934-9122/.minikube/proxy-client-ca.key
	I1122 00:31:51.725395  250396 certs.go:257] generating profile certs ...
	I1122 00:31:51.725464  250396 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/embed-certs-084979/client.key
	I1122 00:31:51.725481  250396 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/embed-certs-084979/client.crt with IP's: []
	I1122 00:31:51.785187  250396 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/embed-certs-084979/client.crt ...
	I1122 00:31:51.785211  250396 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/embed-certs-084979/client.crt: {Name:mk830ed4fcb985c65a974ee02d16ac0f9d685d6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:31:51.785367  250396 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/embed-certs-084979/client.key ...
	I1122 00:31:51.785379  250396 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/embed-certs-084979/client.key: {Name:mk653952efc7ac0956717f9b7e36d389ed0e2a03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:31:51.785457  250396 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/embed-certs-084979/apiserver.key.07b0558b
	I1122 00:31:51.785471  250396 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/embed-certs-084979/apiserver.crt.07b0558b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1122 00:31:51.999382  250396 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/embed-certs-084979/apiserver.crt.07b0558b ...
	I1122 00:31:51.999405  250396 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/embed-certs-084979/apiserver.crt.07b0558b: {Name:mkb6532e83c26df6540d503cab858cd41d31a97d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:31:51.999570  250396 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/embed-certs-084979/apiserver.key.07b0558b ...
	I1122 00:31:51.999584  250396 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/embed-certs-084979/apiserver.key.07b0558b: {Name:mk1a38902bbc52c78732928f3b3e47dae7e2ccc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:31:51.999662  250396 certs.go:382] copying /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/embed-certs-084979/apiserver.crt.07b0558b -> /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/embed-certs-084979/apiserver.crt
	I1122 00:31:51.999745  250396 certs.go:386] copying /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/embed-certs-084979/apiserver.key.07b0558b -> /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/embed-certs-084979/apiserver.key
	I1122 00:31:51.999833  250396 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/embed-certs-084979/proxy-client.key
	I1122 00:31:51.999853  250396 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/embed-certs-084979/proxy-client.crt with IP's: []
	I1122 00:31:52.055968  250396 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/embed-certs-084979/proxy-client.crt ...
	I1122 00:31:52.055992  250396 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/embed-certs-084979/proxy-client.crt: {Name:mk03d889db74e292f9976d617aa05998cb02e66b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:31:52.056171  250396 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/embed-certs-084979/proxy-client.key ...
	I1122 00:31:52.056189  250396 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/embed-certs-084979/proxy-client.key: {Name:mk4c9aa5f036245d68274405484e9ac87026c161 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
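
Note the apiserver certificate's SAN list a few lines up: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]. The first entry is the kubernetes Service ClusterIP, i.e. the first host address of the ServiceCIDR 10.96.0.0/12 from the cluster config, so in-cluster clients dialing https://10.96.0.1 can verify the certificate. A quick check of that derivation (sketch, overflow handling omitted):

    // Sketch: derive the kubernetes Service ClusterIP (first host address)
    // from a service CIDR, matching the 10.96.0.1 SAN in the log.
    package main

    import (
    	"fmt"
    	"net"
    )

    func main() {
    	_, cidr, err := net.ParseCIDR("10.96.0.0/12")
    	if err != nil {
    		panic(err)
    	}
    	ip := cidr.IP.To4()
    	first := net.IPv4(ip[0], ip[1], ip[2], ip[3]+1) // network address + 1
    	fmt.Println(first)                              // 10.96.0.1
    }
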
	I1122 00:31:52.056392  250396 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/14585.pem (1338 bytes)
	W1122 00:31:52.056432  250396 certs.go:480] ignoring /home/jenkins/minikube-integration/21934-9122/.minikube/certs/14585_empty.pem, impossibly tiny 0 bytes
	I1122 00:31:52.056444  250396 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca-key.pem (1679 bytes)
	I1122 00:31:52.056470  250396 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem (1078 bytes)
	I1122 00:31:52.056495  250396 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem (1123 bytes)
	I1122 00:31:52.056520  250396 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/key.pem (1675 bytes)
	I1122 00:31:52.056572  250396 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem (1708 bytes)
	I1122 00:31:52.057206  250396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1122 00:31:52.075454  250396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1122 00:31:52.093570  250396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1122 00:31:52.111215  250396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1122 00:31:52.128121  250396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/embed-certs-084979/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1122 00:31:52.145590  250396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/embed-certs-084979/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1122 00:31:52.161883  250396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/embed-certs-084979/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1122 00:31:52.178682  250396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/embed-certs-084979/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1122 00:31:52.196086  250396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1122 00:31:52.213742  250396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/certs/14585.pem --> /usr/share/ca-certificates/14585.pem (1338 bytes)
	I1122 00:31:52.230306  250396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem --> /usr/share/ca-certificates/145852.pem (1708 bytes)
	I1122 00:31:52.246583  250396 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1122 00:31:52.258573  250396 ssh_runner.go:195] Run: openssl version
	I1122 00:31:52.264445  250396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1122 00:31:52.273043  250396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:31:52.276979  250396 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 23:46 /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:31:52.277025  250396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:31:52.313103  250396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1122 00:31:52.321892  250396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14585.pem && ln -fs /usr/share/ca-certificates/14585.pem /etc/ssl/certs/14585.pem"
	I1122 00:31:52.330157  250396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14585.pem
	I1122 00:31:52.333555  250396 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 23:52 /usr/share/ca-certificates/14585.pem
	I1122 00:31:52.333604  250396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14585.pem
	I1122 00:31:52.367422  250396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14585.pem /etc/ssl/certs/51391683.0"
	I1122 00:31:52.375302  250396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145852.pem && ln -fs /usr/share/ca-certificates/145852.pem /etc/ssl/certs/145852.pem"
	I1122 00:31:52.384128  250396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145852.pem
	I1122 00:31:52.387751  250396 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 23:52 /usr/share/ca-certificates/145852.pem
	I1122 00:31:52.387800  250396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145852.pem
	I1122 00:31:52.423863  250396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145852.pem /etc/ssl/certs/3ec20f2e.0"
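
The openssl x509 -hash / ln -fs pairs above implement OpenSSL's hashed CA directory layout: each CA installed under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named <subject-hash>.0 (b5213941.0, 51391683.0, and 3ec20f2e.0 in this run), which is how OpenSSL-linked clients locate trust anchors by subject hash. A sketch of one linking step, shelling out to openssl the way the remote commands do (needs root for /etc/ssl/certs; not minikube's code):

    // Sketch: create the <subject-hash>.0 symlink for a CA cert,
    // mirroring the openssl/ln commands in the log.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	cert := "/usr/share/ca-certificates/minikubeCA.pem"
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
    	if err != nil {
    		panic(err)
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	_ = os.Remove(link) // ln -fs semantics: replace any stale link
    	if err := os.Symlink(cert, link); err != nil {
    		panic(err)
    	}
    	fmt.Println("linked", link, "->", cert)
    }
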
	I1122 00:31:52.432456  250396 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1122 00:31:52.435730  250396 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1122 00:31:52.435791  250396 kubeadm.go:401] StartCluster: {Name:embed-certs-084979 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-084979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:31:52.435860  250396 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1122 00:31:52.435913  250396 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1122 00:31:52.461819  250396 cri.go:89] found id: ""
	I1122 00:31:52.461886  250396 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1122 00:31:52.469168  250396 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1122 00:31:52.476311  250396 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1122 00:31:52.476361  250396 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1122 00:31:52.483567  250396 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1122 00:31:52.483586  250396 kubeadm.go:158] found existing configuration files:
	
	I1122 00:31:52.483623  250396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1122 00:31:52.490766  250396 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1122 00:31:52.490823  250396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1122 00:31:52.497480  250396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1122 00:31:52.504492  250396 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1122 00:31:52.504532  250396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1122 00:31:52.511332  250396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1122 00:31:52.518517  250396 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1122 00:31:52.518562  250396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1122 00:31:52.525346  250396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1122 00:31:52.532199  250396 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1122 00:31:52.532243  250396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1122 00:31:52.538898  250396 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1122 00:31:52.593013  250396 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1122 00:31:52.646878  250396 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1122 00:31:51.794137  218533 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1122 00:31:51.794537  218533 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1122 00:31:51.794597  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:31:51.794642  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:31:51.822089  218533 cri.go:89] found id: "28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5"
	I1122 00:31:51.822110  218533 cri.go:89] found id: ""
	I1122 00:31:51.822120  218533 logs.go:282] 1 containers: [28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5]
	I1122 00:31:51.822178  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:31:51.826338  218533 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1122 00:31:51.826389  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:31:51.852433  218533 cri.go:89] found id: ""
	I1122 00:31:51.852457  218533 logs.go:282] 0 containers: []
	W1122 00:31:51.852466  218533 logs.go:284] No container was found matching "etcd"
	I1122 00:31:51.852472  218533 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1122 00:31:51.852518  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:31:51.877216  218533 cri.go:89] found id: ""
	I1122 00:31:51.877239  218533 logs.go:282] 0 containers: []
	W1122 00:31:51.877249  218533 logs.go:284] No container was found matching "coredns"
	I1122 00:31:51.877255  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:31:51.877308  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:31:51.903379  218533 cri.go:89] found id: "f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:31:51.903399  218533 cri.go:89] found id: ""
	I1122 00:31:51.903409  218533 logs.go:282] 1 containers: [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33]
	I1122 00:31:51.903466  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:31:51.907316  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:31:51.907375  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:31:51.933242  218533 cri.go:89] found id: ""
	I1122 00:31:51.933266  218533 logs.go:282] 0 containers: []
	W1122 00:31:51.933276  218533 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:31:51.933283  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:31:51.933340  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:31:51.958648  218533 cri.go:89] found id: "93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c"
	I1122 00:31:51.958666  218533 cri.go:89] found id: "fc0b1100e632feea08739f72906f3988af3242a3f139db7f843c6f20733f87d9"
	I1122 00:31:51.958672  218533 cri.go:89] found id: ""
	I1122 00:31:51.958681  218533 logs.go:282] 2 containers: [93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c fc0b1100e632feea08739f72906f3988af3242a3f139db7f843c6f20733f87d9]
	I1122 00:31:51.958737  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:31:51.962259  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:31:51.965555  218533 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1122 00:31:51.965610  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:31:51.990254  218533 cri.go:89] found id: ""
	I1122 00:31:51.990273  218533 logs.go:282] 0 containers: []
	W1122 00:31:51.990281  218533 logs.go:284] No container was found matching "kindnet"
	I1122 00:31:51.990287  218533 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:31:51.990332  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:31:52.014313  218533 cri.go:89] found id: ""
	I1122 00:31:52.014334  218533 logs.go:282] 0 containers: []
	W1122 00:31:52.014342  218533 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:31:52.014359  218533 logs.go:123] Gathering logs for dmesg ...
	I1122 00:31:52.014371  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1122 00:31:52.027669  218533 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:31:52.027687  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1122 00:31:52.081269  218533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1122 00:31:52.081286  218533 logs.go:123] Gathering logs for kube-apiserver [28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5] ...
	I1122 00:31:52.081300  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5"
	I1122 00:31:52.112861  218533 logs.go:123] Gathering logs for kube-scheduler [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33] ...
	I1122 00:31:52.112885  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:31:52.165363  218533 logs.go:123] Gathering logs for kubelet ...
	I1122 00:31:52.165385  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:31:52.248168  218533 logs.go:123] Gathering logs for kube-controller-manager [93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c] ...
	I1122 00:31:52.248193  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c"
	I1122 00:31:52.273678  218533 logs.go:123] Gathering logs for kube-controller-manager [fc0b1100e632feea08739f72906f3988af3242a3f139db7f843c6f20733f87d9] ...
	I1122 00:31:52.273701  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc0b1100e632feea08739f72906f3988af3242a3f139db7f843c6f20733f87d9"
	I1122 00:31:52.300348  218533 logs.go:123] Gathering logs for CRI-O ...
	I1122 00:31:52.300371  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1122 00:31:52.355540  218533 logs.go:123] Gathering logs for container status ...
	I1122 00:31:52.355565  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1122 00:31:50.848281  252747 out.go:252] * Restarting existing docker container for "no-preload-983546" ...
	I1122 00:31:50.848356  252747 cli_runner.go:164] Run: docker start no-preload-983546
	I1122 00:31:51.131921  252747 cli_runner.go:164] Run: docker container inspect no-preload-983546 --format={{.State.Status}}
	I1122 00:31:51.151622  252747 kic.go:430] container "no-preload-983546" state is running.
	I1122 00:31:51.151958  252747 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-983546
	I1122 00:31:51.171404  252747 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/no-preload-983546/config.json ...
	I1122 00:31:51.171627  252747 machine.go:94] provisionDockerMachine start ...
	I1122 00:31:51.171729  252747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983546
	I1122 00:31:51.192252  252747 main.go:143] libmachine: Using SSH client type: native
	I1122 00:31:51.192557  252747 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1122 00:31:51.192580  252747 main.go:143] libmachine: About to run SSH command:
	hostname
	I1122 00:31:51.193349  252747 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50394->127.0.0.1:33073: read: connection reset by peer
	I1122 00:31:54.314715  252747 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-983546
	
	I1122 00:31:54.314745  252747 ubuntu.go:182] provisioning hostname "no-preload-983546"
	I1122 00:31:54.314802  252747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983546
	I1122 00:31:54.334974  252747 main.go:143] libmachine: Using SSH client type: native
	I1122 00:31:54.335274  252747 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1122 00:31:54.335295  252747 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-983546 && echo "no-preload-983546" | sudo tee /etc/hostname
	I1122 00:31:54.465189  252747 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-983546
	
	I1122 00:31:54.465278  252747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983546
	I1122 00:31:54.484420  252747 main.go:143] libmachine: Using SSH client type: native
	I1122 00:31:54.484637  252747 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1122 00:31:54.484653  252747 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-983546' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-983546/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-983546' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1122 00:31:54.608351  252747 main.go:143] libmachine: SSH cmd err, output: <nil>: 
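
The multi-line command above (through the final `fi`) applies the Debian convention of binding the machine's own hostname to 127.0.1.1: if no line in /etc/hosts already names no-preload-983546, an existing 127.0.1.1 entry is rewritten in place, otherwise one is appended, leaving a mapping like:

    127.0.1.1 no-preload-983546
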
	I1122 00:31:54.608375  252747 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21934-9122/.minikube CaCertPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21934-9122/.minikube}
	I1122 00:31:54.608404  252747 ubuntu.go:190] setting up certificates
	I1122 00:31:54.608413  252747 provision.go:84] configureAuth start
	I1122 00:31:54.608458  252747 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-983546
	I1122 00:31:54.627818  252747 provision.go:143] copyHostCerts
	I1122 00:31:54.627870  252747 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9122/.minikube/ca.pem, removing ...
	I1122 00:31:54.627882  252747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9122/.minikube/ca.pem
	I1122 00:31:54.627942  252747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21934-9122/.minikube/ca.pem (1078 bytes)
	I1122 00:31:54.628033  252747 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9122/.minikube/cert.pem, removing ...
	I1122 00:31:54.628042  252747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9122/.minikube/cert.pem
	I1122 00:31:54.628107  252747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21934-9122/.minikube/cert.pem (1123 bytes)
	I1122 00:31:54.628190  252747 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9122/.minikube/key.pem, removing ...
	I1122 00:31:54.628198  252747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9122/.minikube/key.pem
	I1122 00:31:54.628230  252747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21934-9122/.minikube/key.pem (1675 bytes)
	I1122 00:31:54.628307  252747 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21934-9122/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca-key.pem org=jenkins.no-preload-983546 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-983546]
	I1122 00:31:54.742310  252747 provision.go:177] copyRemoteCerts
	I1122 00:31:54.742364  252747 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1122 00:31:54.742401  252747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983546
	I1122 00:31:54.760782  252747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/no-preload-983546/id_rsa Username:docker}
	I1122 00:31:54.854217  252747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1122 00:31:54.872016  252747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1122 00:31:54.889448  252747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1122 00:31:54.906895  252747 provision.go:87] duration metric: took 298.456083ms to configureAuth
	I1122 00:31:54.906922  252747 ubuntu.go:206] setting minikube options for container-runtime
	I1122 00:31:54.907146  252747 config.go:182] Loaded profile config "no-preload-983546": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:31:54.907290  252747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983546
	I1122 00:31:54.931380  252747 main.go:143] libmachine: Using SSH client type: native
	I1122 00:31:54.931696  252747 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1122 00:31:54.931723  252747 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1122 00:31:55.260050  252747 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1122 00:31:55.260092  252747 machine.go:97] duration metric: took 4.088447626s to provisionDockerMachine
	I1122 00:31:55.260106  252747 start.go:293] postStartSetup for "no-preload-983546" (driver="docker")
	I1122 00:31:55.260120  252747 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1122 00:31:55.260182  252747 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1122 00:31:55.260256  252747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983546
	I1122 00:31:55.281816  252747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/no-preload-983546/id_rsa Username:docker}
	I1122 00:31:55.373431  252747 ssh_runner.go:195] Run: cat /etc/os-release
	I1122 00:31:55.376810  252747 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1122 00:31:55.376843  252747 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1122 00:31:55.376855  252747 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-9122/.minikube/addons for local assets ...
	I1122 00:31:55.376905  252747 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-9122/.minikube/files for local assets ...
	I1122 00:31:55.376999  252747 filesync.go:149] local asset: /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem -> 145852.pem in /etc/ssl/certs
	I1122 00:31:55.377153  252747 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1122 00:31:55.384704  252747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem --> /etc/ssl/certs/145852.pem (1708 bytes)
	I1122 00:31:55.402924  252747 start.go:296] duration metric: took 142.803451ms for postStartSetup
	I1122 00:31:55.402990  252747 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:31:55.403084  252747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983546
	I1122 00:31:55.424299  252747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/no-preload-983546/id_rsa Username:docker}
	I1122 00:31:55.517831  252747 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 00:31:55.522329  252747 fix.go:56] duration metric: took 4.696707078s for fixHost
	I1122 00:31:55.522358  252747 start.go:83] releasing machines lock for "no-preload-983546", held for 4.696763245s
	I1122 00:31:55.522429  252747 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-983546
	I1122 00:31:55.540303  252747 ssh_runner.go:195] Run: cat /version.json
	I1122 00:31:55.540353  252747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983546
	I1122 00:31:55.540390  252747 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1122 00:31:55.540446  252747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983546
	I1122 00:31:55.560177  252747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/no-preload-983546/id_rsa Username:docker}
	I1122 00:31:55.560516  252747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/no-preload-983546/id_rsa Username:docker}
	I1122 00:31:55.696599  252747 ssh_runner.go:195] Run: systemctl --version
	I1122 00:31:55.702926  252747 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1122 00:31:55.735448  252747 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1122 00:31:55.739993  252747 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1122 00:31:55.740069  252747 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1122 00:31:55.747620  252747 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1122 00:31:55.747640  252747 start.go:496] detecting cgroup driver to use...
	I1122 00:31:55.747674  252747 detect.go:190] detected "systemd" cgroup driver on host os
	I1122 00:31:55.747717  252747 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1122 00:31:55.761064  252747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1122 00:31:55.772340  252747 docker.go:218] disabling cri-docker service (if available) ...
	I1122 00:31:55.772403  252747 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1122 00:31:55.785492  252747 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1122 00:31:55.796478  252747 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1122 00:31:55.874681  252747 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1122 00:31:55.961366  252747 docker.go:234] disabling docker service ...
	I1122 00:31:55.961432  252747 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1122 00:31:55.974916  252747 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1122 00:31:55.986497  252747 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1122 00:31:56.068892  252747 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1122 00:31:56.148432  252747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1122 00:31:56.161452  252747 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1122 00:31:56.176024  252747 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1122 00:31:56.176100  252747 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:31:56.184853  252747 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1122 00:31:56.184907  252747 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:31:56.193105  252747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:31:56.201194  252747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:31:56.209087  252747 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1122 00:31:56.216413  252747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:31:56.224446  252747 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:31:56.232310  252747 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:31:56.240372  252747 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1122 00:31:56.247278  252747 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1122 00:31:56.254025  252747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:31:56.328811  252747 ssh_runner.go:195] Run: sudo systemctl restart crio
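Editor's note: before this restart, the runner rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch cgroup_manager to systemd, and re-seed conmon_cgroup (the log also opens unprivileged ports via default_sysctls). A sketch that replays the core sed edits locally, copied verbatim from the log:

package main

import (
	"fmt"
	"os/exec"
)

// The sed edits below are copied from the log; they all target CRI-O's
// minikube drop-in at /etc/crio/crio.conf.d/02-crio.conf.
var crioEdits = []string{
	`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf`,
	`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf`,
	`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
	`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
}

func main() {
	for _, cmd := range crioEdits {
		if out, err := exec.Command("sh", "-c", cmd).CombinedOutput(); err != nil {
			fmt.Printf("%s: %v: %s\n", cmd, err, out)
			return
		}
	}
	// Reload units and restart CRI-O to pick up the new config, as the log does.
	exec.Command("sh", "-c", "sudo systemctl daemon-reload && sudo systemctl restart crio").Run()
}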
	I1122 00:31:56.460550  252747 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1122 00:31:56.460619  252747 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1122 00:31:56.464996  252747 start.go:564] Will wait 60s for crictl version
	I1122 00:31:56.465083  252747 ssh_runner.go:195] Run: which crictl
	I1122 00:31:56.468598  252747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1122 00:31:56.493086  252747 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1122 00:31:56.493164  252747 ssh_runner.go:195] Run: crio --version
	I1122 00:31:56.522723  252747 ssh_runner.go:195] Run: crio --version
	I1122 00:31:56.550862  252747 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	W1122 00:31:54.323974  246023 pod_ready.go:104] pod "coredns-5dd5756b68-lwzsc" is not "Ready", error: <nil>
	W1122 00:31:56.324607  246023 pod_ready.go:104] pod "coredns-5dd5756b68-lwzsc" is not "Ready", error: <nil>
	I1122 00:31:56.552289  252747 cli_runner.go:164] Run: docker network inspect no-preload-983546 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:31:56.570743  252747 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1122 00:31:56.574737  252747 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
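Editor's note: the /etc/hosts update above is an idempotent replace: filter out any existing host.minikube.internal line, append the current mapping, stage the result in /tmp/h.$$ and copy it over. The copy (rather than a rename) likely matters because /etc/hosts is usually a bind mount inside the container and cannot be replaced atomically. A pure-Go sketch of the same pattern (function name is ours):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry (name ours) mirrors the shell pipeline from the log: drop
// any line already mapping host, append a fresh "ip\thost" mapping, and write
// the file back in place.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // stale entry; re-added below
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.76.1", "host.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}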
	I1122 00:31:56.584814  252747 kubeadm.go:884] updating cluster {Name:no-preload-983546 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-983546 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1122 00:31:56.584908  252747 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:31:56.584937  252747 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:31:56.618953  252747 crio.go:514] all images are preloaded for cri-o runtime.
	I1122 00:31:56.618977  252747 cache_images.go:86] Images are preloaded, skipping loading
	I1122 00:31:56.618986  252747 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1122 00:31:56.619132  252747 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-983546 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-983546 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1122 00:31:56.619210  252747 ssh_runner.go:195] Run: crio config
	I1122 00:31:56.672075  252747 cni.go:84] Creating CNI manager for ""
	I1122 00:31:56.672099  252747 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1122 00:31:56.672118  252747 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1122 00:31:56.672149  252747 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-983546 NodeName:no-preload-983546 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1122 00:31:56.672287  252747 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-983546"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1122 00:31:56.672436  252747 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1122 00:31:56.683026  252747 binaries.go:51] Found k8s binaries, skipping transfer
	I1122 00:31:56.683102  252747 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1122 00:31:56.692749  252747 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1122 00:31:56.708604  252747 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1122 00:31:56.722605  252747 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
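Editor's note: the staged /var/tmp/minikube/kubeadm.yaml.new concatenates four API documents (InitConfiguration and ClusterConfiguration at kubeadm.k8s.io/v1beta4, KubeletConfiguration at kubelet.config.k8s.io/v1beta1, KubeProxyConfiguration at kubeproxy.config.k8s.io/v1alpha1), as dumped above. A quick sanity check that splits and identifies them (a sketch, using gopkg.in/yaml.v3):

package main

import (
	"fmt"
	"os"
	"strings"

	"gopkg.in/yaml.v3"
)

func main() {
	// The staged config is one file containing several YAML documents.
	raw, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Println(err)
		return
	}
	for _, doc := range strings.Split(string(raw), "\n---\n") {
		var meta struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := yaml.Unmarshal([]byte(doc), &meta); err != nil || meta.Kind == "" {
			continue
		}
		fmt.Printf("%s / %s\n", meta.APIVersion, meta.Kind)
	}
}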
	I1122 00:31:56.738507  252747 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1122 00:31:56.743442  252747 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:31:56.752609  252747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:31:56.843504  252747 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:31:56.874364  252747 certs.go:69] Setting up /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/no-preload-983546 for IP: 192.168.76.2
	I1122 00:31:56.874392  252747 certs.go:195] generating shared ca certs ...
	I1122 00:31:56.874414  252747 certs.go:227] acquiring lock for ca certs: {Name:mk04e97b22fe69e444baefaaf0d53a0afe979470 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:31:56.874581  252747 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21934-9122/.minikube/ca.key
	I1122 00:31:56.874643  252747 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21934-9122/.minikube/proxy-client-ca.key
	I1122 00:31:56.874667  252747 certs.go:257] generating profile certs ...
	I1122 00:31:56.874783  252747 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/no-preload-983546/client.key
	I1122 00:31:56.874848  252747 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/no-preload-983546/apiserver.key.c827695f
	I1122 00:31:56.874896  252747 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/no-preload-983546/proxy-client.key
	I1122 00:31:56.875031  252747 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/14585.pem (1338 bytes)
	W1122 00:31:56.875099  252747 certs.go:480] ignoring /home/jenkins/minikube-integration/21934-9122/.minikube/certs/14585_empty.pem, impossibly tiny 0 bytes
	I1122 00:31:56.875114  252747 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca-key.pem (1679 bytes)
	I1122 00:31:56.875151  252747 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem (1078 bytes)
	I1122 00:31:56.875186  252747 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem (1123 bytes)
	I1122 00:31:56.875218  252747 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/key.pem (1675 bytes)
	I1122 00:31:56.875277  252747 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem (1708 bytes)
	I1122 00:31:56.876110  252747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1122 00:31:56.899488  252747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1122 00:31:56.923289  252747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1122 00:31:56.945822  252747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1122 00:31:56.976511  252747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/no-preload-983546/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1122 00:31:56.998348  252747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/no-preload-983546/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1122 00:31:57.020565  252747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/no-preload-983546/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1122 00:31:57.041861  252747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/no-preload-983546/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1122 00:31:57.058625  252747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem --> /usr/share/ca-certificates/145852.pem (1708 bytes)
	I1122 00:31:57.079381  252747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1122 00:31:57.100945  252747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/certs/14585.pem --> /usr/share/ca-certificates/14585.pem (1338 bytes)
	I1122 00:31:57.122401  252747 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1122 00:31:57.136844  252747 ssh_runner.go:195] Run: openssl version
	I1122 00:31:57.144785  252747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145852.pem && ln -fs /usr/share/ca-certificates/145852.pem /etc/ssl/certs/145852.pem"
	I1122 00:31:57.154509  252747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145852.pem
	I1122 00:31:57.157998  252747 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 23:52 /usr/share/ca-certificates/145852.pem
	I1122 00:31:57.158043  252747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145852.pem
	I1122 00:31:57.196911  252747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145852.pem /etc/ssl/certs/3ec20f2e.0"
	I1122 00:31:57.204953  252747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1122 00:31:57.216330  252747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:31:57.221264  252747 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 23:46 /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:31:57.221316  252747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:31:57.256157  252747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1122 00:31:57.263394  252747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14585.pem && ln -fs /usr/share/ca-certificates/14585.pem /etc/ssl/certs/14585.pem"
	I1122 00:31:57.272232  252747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14585.pem
	I1122 00:31:57.275624  252747 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 23:52 /usr/share/ca-certificates/14585.pem
	I1122 00:31:57.275665  252747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14585.pem
	I1122 00:31:57.334483  252747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14585.pem /etc/ssl/certs/51391683.0"
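Editor's note: each `openssl x509 -hash -noout` run above prints the certificate's OpenSSL subject hash, and the paired `ln -fs` publishes the cert in /etc/ssl/certs as <hash>.0, which is how OpenSSL-based clients locate trust anchors; in this log minikubeCA.pem hashes to b5213941, hence the b5213941.0 link. A sketch of the pair (the helper name is ours):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// trustCert (name ours) pairs the two commands from the log: compute the
// OpenSSL subject hash of a CA, then publish it in /etc/ssl/certs as <hash>.0.
func trustCert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	return exec.Command("sudo", "ln", "-fs", pemPath, "/etc/ssl/certs/"+hash+".0").Run()
}

func main() {
	fmt.Println(trustCert("/usr/share/ca-certificates/minikubeCA.pem"))
}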
	I1122 00:31:57.344634  252747 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1122 00:31:57.348789  252747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1122 00:31:57.401421  252747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1122 00:31:57.456304  252747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1122 00:31:57.516102  252747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1122 00:31:57.573323  252747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1122 00:31:57.627564  252747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
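Editor's note: `openssl x509 -checkend 86400` exits non-zero when the certificate expires within the next 24 hours, which is how the restart path decides whether control-plane certs need regeneration. A pure-Go equivalent using crypto/x509 (the helper name is ours):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin (name ours) reports whether the first certificate in a PEM
// file expires inside the given window, like `openssl x509 -checkend`.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}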
	I1122 00:31:57.680496  252747 kubeadm.go:401] StartCluster: {Name:no-preload-983546 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-983546 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:31:57.680614  252747 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1122 00:31:57.680688  252747 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1122 00:31:57.713679  252747 cri.go:89] found id: "15ff8ca6c3bd36d322a741daeef18c6b81980f6be123f1eccf822d0b1ce32e19"
	I1122 00:31:57.713708  252747 cri.go:89] found id: "2395f0fc0ddc2558b662ecf094a2c9137111096336ce24f63f4bb978edacc84d"
	I1122 00:31:57.713714  252747 cri.go:89] found id: "2e71abd4010063bf4aff10634290d6163b0d784274776fb107399539e1af2d22"
	I1122 00:31:57.713719  252747 cri.go:89] found id: "748b8383a47b0f40485edc4c674299b4dcb993eccaae00337a17f00f55de0076"
	I1122 00:31:57.713723  252747 cri.go:89] found id: ""
	I1122 00:31:57.713771  252747 ssh_runner.go:195] Run: sudo runc list -f json
	W1122 00:31:57.730360  252747 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-22T00:31:57Z" level=error msg="open /run/runc: no such file or directory"
	I1122 00:31:57.730524  252747 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1122 00:31:57.743981  252747 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1122 00:31:57.743998  252747 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1122 00:31:57.744102  252747 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1122 00:31:57.754761  252747 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1122 00:31:57.756511  252747 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-983546" does not appear in /home/jenkins/minikube-integration/21934-9122/kubeconfig
	I1122 00:31:57.757200  252747 kubeconfig.go:62] /home/jenkins/minikube-integration/21934-9122/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-983546" cluster setting kubeconfig missing "no-preload-983546" context setting]
	I1122 00:31:57.758247  252747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/kubeconfig: {Name:mk85f0b9d89ff824568ecbebf0d1111042a6bb93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:31:57.760139  252747 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1122 00:31:57.771135  252747 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1122 00:31:57.771164  252747 kubeadm.go:602] duration metric: took 27.159505ms to restartPrimaryControlPlane
	I1122 00:31:57.771179  252747 kubeadm.go:403] duration metric: took 90.693509ms to StartCluster
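Editor's note: the restart path decides whether the control plane needs reconfiguration by diffing the staged kubeadm.yaml.new against the kubeadm.yaml already on disk, as in the `sudo diff -u` run above; exit status 0 means the configs match and kubeadm is skipped ("does not require reconfiguration"). A sketch of that decision (the helper name is ours):

package main

import (
	"fmt"
	"os/exec"
)

// needsReconfig (name ours) mirrors the check above: diff exits 0 when the
// staged config matches what is already on disk, 1 when the two differ.
func needsReconfig(oldPath, newPath string) (bool, error) {
	err := exec.Command("sudo", "diff", "-u", oldPath, newPath).Run()
	if err == nil {
		return false, nil // identical: skip kubeadm reconfiguration
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		return true, nil // configs differ: control plane must be reconfigured
	}
	return false, err // exit code 2 or exec failure
}

func main() {
	changed, err := needsReconfig("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	fmt.Println(changed, err)
}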
	I1122 00:31:57.771242  252747 settings.go:142] acquiring lock: {Name:mk281bec5fc7c41e6f3fe8d6a1502b13a2db8fe5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:31:57.771303  252747 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21934-9122/kubeconfig
	I1122 00:31:57.772922  252747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/kubeconfig: {Name:mk85f0b9d89ff824568ecbebf0d1111042a6bb93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:31:57.773154  252747 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1122 00:31:57.773373  252747 config.go:182] Loaded profile config "no-preload-983546": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:31:57.773425  252747 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1122 00:31:57.773502  252747 addons.go:70] Setting storage-provisioner=true in profile "no-preload-983546"
	I1122 00:31:57.773525  252747 addons.go:239] Setting addon storage-provisioner=true in "no-preload-983546"
	W1122 00:31:57.773533  252747 addons.go:248] addon storage-provisioner should already be in state true
	I1122 00:31:57.773559  252747 host.go:66] Checking if "no-preload-983546" exists ...
	I1122 00:31:57.773633  252747 addons.go:70] Setting dashboard=true in profile "no-preload-983546"
	I1122 00:31:57.773665  252747 addons.go:239] Setting addon dashboard=true in "no-preload-983546"
	W1122 00:31:57.773672  252747 addons.go:248] addon dashboard should already be in state true
	I1122 00:31:57.773724  252747 host.go:66] Checking if "no-preload-983546" exists ...
	I1122 00:31:57.774045  252747 cli_runner.go:164] Run: docker container inspect no-preload-983546 --format={{.State.Status}}
	I1122 00:31:57.774162  252747 cli_runner.go:164] Run: docker container inspect no-preload-983546 --format={{.State.Status}}
	I1122 00:31:57.774249  252747 addons.go:70] Setting default-storageclass=true in profile "no-preload-983546"
	I1122 00:31:57.774273  252747 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-983546"
	I1122 00:31:57.774573  252747 cli_runner.go:164] Run: docker container inspect no-preload-983546 --format={{.State.Status}}
	I1122 00:31:57.776330  252747 out.go:179] * Verifying Kubernetes components...
	I1122 00:31:57.777645  252747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:31:57.804654  252747 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1122 00:31:57.805951  252747 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1122 00:31:57.805997  252747 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1122 00:31:54.885582  218533 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1122 00:31:54.885967  218533 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
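Editor's note: the api_server.go lines above probe the apiserver's /healthz endpoint and classify a refused connection as "stopped". A rough equivalent, assuming TLS verification is skipped for the sketch (minikube's real probe handles the cluster CA and response body differently):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probeHealthz (name ours) hits /healthz with a short timeout and reports
// connection errors the way the log does ("stopped: ... connection refused").
func probeHealthz(addr string) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://" + addr + "/healthz")
	if err != nil {
		return fmt.Errorf("stopped: %w", err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("unhealthy: %s", resp.Status)
	}
	return nil
}

func main() {
	fmt.Println(probeHealthz("192.168.103.2:8443"))
}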
	I1122 00:31:54.886026  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:31:54.886095  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:31:54.912955  218533 cri.go:89] found id: "28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5"
	I1122 00:31:54.912974  218533 cri.go:89] found id: ""
	I1122 00:31:54.912983  218533 logs.go:282] 1 containers: [28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5]
	I1122 00:31:54.913035  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:31:54.917400  218533 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1122 00:31:54.917458  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:31:54.951913  218533 cri.go:89] found id: ""
	I1122 00:31:54.951937  218533 logs.go:282] 0 containers: []
	W1122 00:31:54.951947  218533 logs.go:284] No container was found matching "etcd"
	I1122 00:31:54.951955  218533 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1122 00:31:54.952009  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:31:54.982692  218533 cri.go:89] found id: ""
	I1122 00:31:54.982716  218533 logs.go:282] 0 containers: []
	W1122 00:31:54.982728  218533 logs.go:284] No container was found matching "coredns"
	I1122 00:31:54.982735  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:31:54.982793  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:31:55.022244  218533 cri.go:89] found id: "f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:31:55.022271  218533 cri.go:89] found id: ""
	I1122 00:31:55.022281  218533 logs.go:282] 1 containers: [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33]
	I1122 00:31:55.022340  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:31:55.027065  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:31:55.027145  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:31:55.053420  218533 cri.go:89] found id: ""
	I1122 00:31:55.053441  218533 logs.go:282] 0 containers: []
	W1122 00:31:55.053451  218533 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:31:55.053458  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:31:55.053519  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:31:55.084948  218533 cri.go:89] found id: "93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c"
	I1122 00:31:55.084972  218533 cri.go:89] found id: ""
	I1122 00:31:55.084982  218533 logs.go:282] 1 containers: [93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c]
	I1122 00:31:55.085042  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:31:55.088797  218533 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1122 00:31:55.088877  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:31:55.115033  218533 cri.go:89] found id: ""
	I1122 00:31:55.115077  218533 logs.go:282] 0 containers: []
	W1122 00:31:55.115089  218533 logs.go:284] No container was found matching "kindnet"
	I1122 00:31:55.115097  218533 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:31:55.115149  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:31:55.142914  218533 cri.go:89] found id: ""
	I1122 00:31:55.142941  218533 logs.go:282] 0 containers: []
	W1122 00:31:55.142952  218533 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:31:55.142966  218533 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:31:55.142987  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1122 00:31:55.204133  218533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1122 00:31:55.204156  218533 logs.go:123] Gathering logs for kube-apiserver [28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5] ...
	I1122 00:31:55.204173  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5"
	I1122 00:31:55.241169  218533 logs.go:123] Gathering logs for kube-scheduler [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33] ...
	I1122 00:31:55.241201  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:31:55.296609  218533 logs.go:123] Gathering logs for kube-controller-manager [93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c] ...
	I1122 00:31:55.296636  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c"
	I1122 00:31:55.323944  218533 logs.go:123] Gathering logs for CRI-O ...
	I1122 00:31:55.323973  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1122 00:31:55.380399  218533 logs.go:123] Gathering logs for container status ...
	I1122 00:31:55.380425  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1122 00:31:55.415326  218533 logs.go:123] Gathering logs for kubelet ...
	I1122 00:31:55.415353  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:31:55.511144  218533 logs.go:123] Gathering logs for dmesg ...
	I1122 00:31:55.511176  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1122 00:31:58.028115  218533 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1122 00:31:58.028583  218533 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1122 00:31:58.028634  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:31:58.028682  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:31:58.078096  218533 cri.go:89] found id: "28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5"
	I1122 00:31:58.078119  218533 cri.go:89] found id: ""
	I1122 00:31:58.078128  218533 logs.go:282] 1 containers: [28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5]
	I1122 00:31:58.078193  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:31:58.084665  218533 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1122 00:31:58.084829  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:31:58.145151  218533 cri.go:89] found id: ""
	I1122 00:31:58.145177  218533 logs.go:282] 0 containers: []
	W1122 00:31:58.145188  218533 logs.go:284] No container was found matching "etcd"
	I1122 00:31:58.145195  218533 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1122 00:31:58.145269  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:31:58.190884  218533 cri.go:89] found id: ""
	I1122 00:31:58.190913  218533 logs.go:282] 0 containers: []
	W1122 00:31:58.190923  218533 logs.go:284] No container was found matching "coredns"
	I1122 00:31:58.190931  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:31:58.190993  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:31:58.245494  218533 cri.go:89] found id: "f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:31:58.245517  218533 cri.go:89] found id: ""
	I1122 00:31:58.245527  218533 logs.go:282] 1 containers: [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33]
	I1122 00:31:58.245596  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:31:58.251672  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:31:58.251741  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:31:58.295969  218533 cri.go:89] found id: ""
	I1122 00:31:58.295989  218533 logs.go:282] 0 containers: []
	W1122 00:31:58.295999  218533 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:31:58.296006  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:31:58.296070  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:31:58.351204  218533 cri.go:89] found id: "93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c"
	I1122 00:31:58.351228  218533 cri.go:89] found id: ""
	I1122 00:31:58.351238  218533 logs.go:282] 1 containers: [93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c]
	I1122 00:31:58.351307  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:31:58.356276  218533 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1122 00:31:58.356339  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:31:58.409482  218533 cri.go:89] found id: ""
	I1122 00:31:58.409506  218533 logs.go:282] 0 containers: []
	W1122 00:31:58.409517  218533 logs.go:284] No container was found matching "kindnet"
	I1122 00:31:58.409524  218533 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:31:58.409576  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:31:58.449491  218533 cri.go:89] found id: ""
	I1122 00:31:58.449635  218533 logs.go:282] 0 containers: []
	W1122 00:31:58.449684  218533 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:31:58.449729  218533 logs.go:123] Gathering logs for dmesg ...
	I1122 00:31:58.449752  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1122 00:31:58.481719  218533 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:31:58.481744  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1122 00:31:58.570885  218533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1122 00:31:58.570908  218533 logs.go:123] Gathering logs for kube-apiserver [28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5] ...
	I1122 00:31:58.570923  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5"
	I1122 00:31:58.620101  218533 logs.go:123] Gathering logs for kube-scheduler [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33] ...
	I1122 00:31:58.620130  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:31:58.702983  218533 logs.go:123] Gathering logs for kube-controller-manager [93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c] ...
	I1122 00:31:58.703015  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c"
	I1122 00:31:58.740116  218533 logs.go:123] Gathering logs for CRI-O ...
	I1122 00:31:58.740145  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1122 00:31:58.837747  218533 logs.go:123] Gathering logs for container status ...
	I1122 00:31:58.837780  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1122 00:31:58.880395  218533 logs.go:123] Gathering logs for kubelet ...
	I1122 00:31:58.880426  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:31:57.806973  252747 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1122 00:31:57.807000  252747 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1122 00:31:57.807042  252747 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:31:57.807081  252747 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1122 00:31:57.807083  252747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983546
	I1122 00:31:57.807130  252747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983546
	I1122 00:31:57.817030  252747 addons.go:239] Setting addon default-storageclass=true in "no-preload-983546"
	W1122 00:31:57.817107  252747 addons.go:248] addon default-storageclass should already be in state true
	I1122 00:31:57.817143  252747 host.go:66] Checking if "no-preload-983546" exists ...
	I1122 00:31:57.817705  252747 cli_runner.go:164] Run: docker container inspect no-preload-983546 --format={{.State.Status}}
	I1122 00:31:57.854339  252747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/no-preload-983546/id_rsa Username:docker}
	I1122 00:31:57.857949  252747 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1122 00:31:57.857968  252747 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1122 00:31:57.858023  252747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983546
	I1122 00:31:57.868177  252747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/no-preload-983546/id_rsa Username:docker}
	I1122 00:31:57.892678  252747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/no-preload-983546/id_rsa Username:docker}
	I1122 00:31:58.001275  252747 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:31:58.014835  252747 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:31:58.029324  252747 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1122 00:31:58.029347  252747 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1122 00:31:58.077202  252747 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1122 00:31:58.077239  252747 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1122 00:31:58.084967  252747 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1122 00:31:58.124114  252747 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1122 00:31:58.124657  252747 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1122 00:31:58.182865  252747 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1122 00:31:58.182887  252747 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1122 00:31:58.204969  252747 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1122 00:31:58.204995  252747 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1122 00:31:58.239980  252747 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1122 00:31:58.240013  252747 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1122 00:31:58.266916  252747 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1122 00:31:58.266941  252747 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1122 00:31:58.288959  252747 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1122 00:31:58.288983  252747 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1122 00:31:58.313673  252747 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1122 00:31:58.313699  252747 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1122 00:31:58.336117  252747 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1122 00:32:01.159349  252747 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.158028615s)
	I1122 00:32:01.159417  252747 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.144545295s)
	I1122 00:32:01.159486  252747 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.074497604s)
	I1122 00:32:01.159500  252747 node_ready.go:35] waiting up to 6m0s for node "no-preload-983546" to be "Ready" ...
	I1122 00:32:01.159583  252747 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.823428245s)
	I1122 00:32:01.161277  252747 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-983546 addons enable metrics-server
	
	I1122 00:32:01.167926  252747 node_ready.go:49] node "no-preload-983546" is "Ready"
	I1122 00:32:01.167949  252747 node_ready.go:38] duration metric: took 8.413326ms for node "no-preload-983546" to be "Ready" ...
	I1122 00:32:01.167962  252747 api_server.go:52] waiting for apiserver process to appear ...
	I1122 00:32:01.168005  252747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:32:01.172509  252747 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
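Editor's note: the node_ready.go wait above resolves once the Node object's Ready condition is True (here after 8.4ms, since the node was already up). A client-go sketch of one such check, assuming kubeconfig access and using the node name from the log:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady (name ours) fetches a Node and reports whether its Ready
// condition is True.
func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ok, err := nodeReady(cs, "no-preload-983546")
	fmt.Println(ok, err)
}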
	W1122 00:31:58.335809  246023 pod_ready.go:104] pod "coredns-5dd5756b68-lwzsc" is not "Ready", error: <nil>
	W1122 00:32:00.826805  246023 pod_ready.go:104] pod "coredns-5dd5756b68-lwzsc" is not "Ready", error: <nil>
	I1122 00:32:03.740361  250396 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1122 00:32:03.740437  250396 kubeadm.go:319] [preflight] Running pre-flight checks
	I1122 00:32:03.740585  250396 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1122 00:32:03.740671  250396 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1122 00:32:03.740718  250396 kubeadm.go:319] OS: Linux
	I1122 00:32:03.740799  250396 kubeadm.go:319] CGROUPS_CPU: enabled
	I1122 00:32:03.740880  250396 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1122 00:32:03.740956  250396 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1122 00:32:03.741043  250396 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1122 00:32:03.741155  250396 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1122 00:32:03.741220  250396 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1122 00:32:03.741303  250396 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1122 00:32:03.741381  250396 kubeadm.go:319] CGROUPS_IO: enabled
	I1122 00:32:03.741480  250396 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1122 00:32:03.741631  250396 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1122 00:32:03.741771  250396 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
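Editor's note: the CGROUPS_* table above comes from kubeadm's preflight system verification. On a cgroup v2 host like this one (6.8 kernel), roughly the same information can be read from the unified hierarchy; this sketch is not kubeadm's actual validator, and v1-era entries such as CGROUPS_DEVICES and CGROUPS_FREEZER are checked differently on v2:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// On cgroup v2, the root cgroup lists every controller the kernel enabled.
	data, err := os.ReadFile("/sys/fs/cgroup/cgroup.controllers")
	if err != nil {
		fmt.Println("not a cgroup v2 host?", err)
		return
	}
	for _, ctrl := range strings.Fields(string(data)) {
		fmt.Printf("CGROUPS_%s: enabled\n", strings.ToUpper(ctrl))
	}
}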
	I1122 00:32:03.741860  250396 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1122 00:32:03.743928  250396 out.go:252]   - Generating certificates and keys ...
	I1122 00:32:03.743995  250396 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1122 00:32:03.744120  250396 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1122 00:32:03.744232  250396 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1122 00:32:03.744291  250396 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1122 00:32:03.744352  250396 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1122 00:32:03.744395  250396 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1122 00:32:03.744472  250396 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1122 00:32:03.744645  250396 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-084979 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1122 00:32:03.744704  250396 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1122 00:32:03.744808  250396 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-084979 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1122 00:32:03.744871  250396 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1122 00:32:03.744932  250396 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1122 00:32:03.744972  250396 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1122 00:32:03.745021  250396 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1122 00:32:03.745115  250396 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1122 00:32:03.745180  250396 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1122 00:32:03.745226  250396 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1122 00:32:03.745300  250396 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1122 00:32:03.745349  250396 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1122 00:32:03.745423  250396 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1122 00:32:03.745504  250396 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1122 00:32:03.746554  250396 out.go:252]   - Booting up control plane ...
	I1122 00:32:03.746645  250396 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1122 00:32:03.746736  250396 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1122 00:32:03.746794  250396 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1122 00:32:03.746895  250396 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1122 00:32:03.746980  250396 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1122 00:32:03.747098  250396 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1122 00:32:03.747186  250396 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1122 00:32:03.747220  250396 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1122 00:32:03.747342  250396 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1122 00:32:03.747434  250396 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1122 00:32:03.747488  250396 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.911364ms
	I1122 00:32:03.747570  250396 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1122 00:32:03.747646  250396 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1122 00:32:03.747724  250396 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1122 00:32:03.747825  250396 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1122 00:32:03.747922  250396 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.665422295s
	I1122 00:32:03.747995  250396 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.549547667s
	I1122 00:32:03.748062  250396 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.001391054s
	I1122 00:32:03.748174  250396 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1122 00:32:03.748301  250396 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1122 00:32:03.748362  250396 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1122 00:32:03.748554  250396 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-084979 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1122 00:32:03.748608  250396 kubeadm.go:319] [bootstrap-token] Using token: etvckh.upaww25zovv37fkt
	I1122 00:32:03.749809  250396 out.go:252]   - Configuring RBAC rules ...
	I1122 00:32:03.749920  250396 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1122 00:32:03.750019  250396 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1122 00:32:03.750181  250396 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1122 00:32:03.750395  250396 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1122 00:32:03.750532  250396 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1122 00:32:03.750663  250396 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1122 00:32:03.750828  250396 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1122 00:32:03.750890  250396 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1122 00:32:03.750959  250396 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1122 00:32:03.750968  250396 kubeadm.go:319] 
	I1122 00:32:03.751069  250396 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1122 00:32:03.751084  250396 kubeadm.go:319] 
	I1122 00:32:03.751199  250396 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1122 00:32:03.751211  250396 kubeadm.go:319] 
	I1122 00:32:03.751253  250396 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1122 00:32:03.751323  250396 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1122 00:32:03.751372  250396 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1122 00:32:03.751382  250396 kubeadm.go:319] 
	I1122 00:32:03.751448  250396 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1122 00:32:03.751455  250396 kubeadm.go:319] 
	I1122 00:32:03.751526  250396 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1122 00:32:03.751537  250396 kubeadm.go:319] 
	I1122 00:32:03.751617  250396 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1122 00:32:03.751731  250396 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1122 00:32:03.751814  250396 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1122 00:32:03.751823  250396 kubeadm.go:319] 
	I1122 00:32:03.751938  250396 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1122 00:32:03.752072  250396 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1122 00:32:03.752083  250396 kubeadm.go:319] 
	I1122 00:32:03.752202  250396 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token etvckh.upaww25zovv37fkt \
	I1122 00:32:03.752361  250396 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b7f0723026bed3ab68bd6fad96097e10a75fbae3d2b8c0df51e0d691a79889b0 \
	I1122 00:32:03.752397  250396 kubeadm.go:319] 	--control-plane 
	I1122 00:32:03.752409  250396 kubeadm.go:319] 
	I1122 00:32:03.752486  250396 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1122 00:32:03.752493  250396 kubeadm.go:319] 
	I1122 00:32:03.752566  250396 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token etvckh.upaww25zovv37fkt \
	I1122 00:32:03.752674  250396 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b7f0723026bed3ab68bd6fad96097e10a75fbae3d2b8c0df51e0d691a79889b0 
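
The --discovery-token-ca-cert-hash in the join commands above can be re-derived on the control plane to validate a copied join line; this is the standard recipe from the kubeadm documentation:

	openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'

The output should match the sha256:... value printed by kubeadm init.
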
	I1122 00:32:03.752687  250396 cni.go:84] Creating CNI manager for ""
	I1122 00:32:03.752693  250396 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1122 00:32:03.753815  250396 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1122 00:32:01.530155  218533 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1122 00:32:01.530629  218533 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1122 00:32:01.530691  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:32:01.530751  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:32:01.573529  218533 cri.go:89] found id: "28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5"
	I1122 00:32:01.573606  218533 cri.go:89] found id: ""
	I1122 00:32:01.573630  218533 logs.go:282] 1 containers: [28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5]
	I1122 00:32:01.573718  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:32:01.579772  218533 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1122 00:32:01.579884  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:32:01.628480  218533 cri.go:89] found id: ""
	I1122 00:32:01.628508  218533 logs.go:282] 0 containers: []
	W1122 00:32:01.628520  218533 logs.go:284] No container was found matching "etcd"
	I1122 00:32:01.628527  218533 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1122 00:32:01.628581  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:32:01.669551  218533 cri.go:89] found id: ""
	I1122 00:32:01.669590  218533 logs.go:282] 0 containers: []
	W1122 00:32:01.669602  218533 logs.go:284] No container was found matching "coredns"
	I1122 00:32:01.669610  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:32:01.669675  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:32:01.709664  218533 cri.go:89] found id: "f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:32:01.709730  218533 cri.go:89] found id: ""
	I1122 00:32:01.709744  218533 logs.go:282] 1 containers: [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33]
	I1122 00:32:01.709807  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:32:01.716273  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:32:01.716338  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:32:01.757836  218533 cri.go:89] found id: ""
	I1122 00:32:01.757865  218533 logs.go:282] 0 containers: []
	W1122 00:32:01.757877  218533 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:32:01.757889  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:32:01.757948  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:32:01.807272  218533 cri.go:89] found id: "93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c"
	I1122 00:32:01.807295  218533 cri.go:89] found id: ""
	I1122 00:32:01.807306  218533 logs.go:282] 1 containers: [93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c]
	I1122 00:32:01.807366  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:32:01.812630  218533 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1122 00:32:01.812696  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:32:01.850551  218533 cri.go:89] found id: ""
	I1122 00:32:01.850589  218533 logs.go:282] 0 containers: []
	W1122 00:32:01.850601  218533 logs.go:284] No container was found matching "kindnet"
	I1122 00:32:01.850609  218533 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:32:01.850667  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:32:01.888142  218533 cri.go:89] found id: ""
	I1122 00:32:01.888172  218533 logs.go:282] 0 containers: []
	W1122 00:32:01.888184  218533 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:32:01.888196  218533 logs.go:123] Gathering logs for kubelet ...
	I1122 00:32:01.888211  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:32:02.026356  218533 logs.go:123] Gathering logs for dmesg ...
	I1122 00:32:02.026396  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1122 00:32:02.046765  218533 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:32:02.046810  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1122 00:32:02.128377  218533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1122 00:32:02.128401  218533 logs.go:123] Gathering logs for kube-apiserver [28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5] ...
	I1122 00:32:02.128416  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5"
	I1122 00:32:02.170600  218533 logs.go:123] Gathering logs for kube-scheduler [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33] ...
	I1122 00:32:02.170631  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:32:02.242991  218533 logs.go:123] Gathering logs for kube-controller-manager [93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c] ...
	I1122 00:32:02.243019  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c"
	I1122 00:32:02.278136  218533 logs.go:123] Gathering logs for CRI-O ...
	I1122 00:32:02.278167  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1122 00:32:02.353550  218533 logs.go:123] Gathering logs for container status ...
	I1122 00:32:02.353590  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
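
When this loop keeps finding a kube-apiserver container while healthz stays connection-refused, the container is usually crash-looping. A hedged way to check, reusing the container ID already found in the log (the JSON field names can differ slightly between CRI runtimes):

	sudo crictl inspect 28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5 \
	  | grep -E '"state"|"exitCode"|"reason"'
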
	I1122 00:32:01.173529  252747 addons.go:530] duration metric: took 3.40010682s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1122 00:32:01.181038  252747 api_server.go:72] duration metric: took 3.407850159s to wait for apiserver process to appear ...
	I1122 00:32:01.181069  252747 api_server.go:88] waiting for apiserver healthz status ...
	I1122 00:32:01.181088  252747 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1122 00:32:01.185781  252747 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1122 00:32:01.185823  252747 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1122 00:32:01.681192  252747 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1122 00:32:01.687851  252747 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1122 00:32:01.687879  252747 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1122 00:32:02.181208  252747 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1122 00:32:02.186915  252747 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1122 00:32:02.188326  252747 api_server.go:141] control plane version: v1.34.1
	I1122 00:32:02.188355  252747 api_server.go:131] duration metric: took 1.007277312s to wait for apiserver health ...
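
The 500 responses above list the failing hooks only as "reason withheld": the apiserver hides check details from unauthenticated clients. With the admin kubeconfig the reasons are included; a sketch using the paths from this run:

	sudo /var/lib/minikube/binaries/v1.34.1/kubectl get --raw '/healthz?verbose' \
	  --kubeconfig=/var/lib/minikube/kubeconfig

Each named check is also exposed as its own subpath (e.g. /healthz/etcd), so a single failing hook can be probed in isolation.
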
	I1122 00:32:02.188367  252747 system_pods.go:43] waiting for kube-system pods to appear ...
	I1122 00:32:02.192329  252747 system_pods.go:59] 8 kube-system pods found
	I1122 00:32:02.192365  252747 system_pods.go:61] "coredns-66bc5c9577-4psr2" [92a4504e-35be-4d9d-86ae-a574cc38590b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:32:02.192378  252747 system_pods.go:61] "etcd-no-preload-983546" [0da66ff3-f7cb-447e-b079-8f17012f75ce] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1122 00:32:02.192385  252747 system_pods.go:61] "kindnet-rpr2g" [59f42291-1016-4584-9fdb-5df09910070b] Running
	I1122 00:32:02.192399  252747 system_pods.go:61] "kube-apiserver-no-preload-983546" [e14c6fe3-b764-4f17-8f05-302c8ea76d4a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1122 00:32:02.192408  252747 system_pods.go:61] "kube-controller-manager-no-preload-983546" [5d5e6efd-fb84-4468-8672-2a926e4faa74] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1122 00:32:02.192415  252747 system_pods.go:61] "kube-proxy-gnlfp" [0b842766-a9da-46e8-9259-f0cdca13c349] Running
	I1122 00:32:02.192425  252747 system_pods.go:61] "kube-scheduler-no-preload-983546" [7c10144e-6965-47c1-8047-1d6b81059de7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1122 00:32:02.192431  252747 system_pods.go:61] "storage-provisioner" [a6c69c5d-deb0-4c04-af56-6a7a594505ca] Running
	I1122 00:32:02.192441  252747 system_pods.go:74] duration metric: took 4.06574ms to wait for pod list to return data ...
	I1122 00:32:02.192449  252747 default_sa.go:34] waiting for default service account to be created ...
	I1122 00:32:02.195085  252747 default_sa.go:45] found service account: "default"
	I1122 00:32:02.195106  252747 default_sa.go:55] duration metric: took 2.651035ms for default service account to be created ...
	I1122 00:32:02.195116  252747 system_pods.go:116] waiting for k8s-apps to be running ...
	I1122 00:32:02.198180  252747 system_pods.go:86] 8 kube-system pods found
	I1122 00:32:02.198203  252747 system_pods.go:89] "coredns-66bc5c9577-4psr2" [92a4504e-35be-4d9d-86ae-a574cc38590b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:32:02.198210  252747 system_pods.go:89] "etcd-no-preload-983546" [0da66ff3-f7cb-447e-b079-8f17012f75ce] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1122 00:32:02.198216  252747 system_pods.go:89] "kindnet-rpr2g" [59f42291-1016-4584-9fdb-5df09910070b] Running
	I1122 00:32:02.198224  252747 system_pods.go:89] "kube-apiserver-no-preload-983546" [e14c6fe3-b764-4f17-8f05-302c8ea76d4a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1122 00:32:02.198231  252747 system_pods.go:89] "kube-controller-manager-no-preload-983546" [5d5e6efd-fb84-4468-8672-2a926e4faa74] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1122 00:32:02.198237  252747 system_pods.go:89] "kube-proxy-gnlfp" [0b842766-a9da-46e8-9259-f0cdca13c349] Running
	I1122 00:32:02.198245  252747 system_pods.go:89] "kube-scheduler-no-preload-983546" [7c10144e-6965-47c1-8047-1d6b81059de7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1122 00:32:02.198250  252747 system_pods.go:89] "storage-provisioner" [a6c69c5d-deb0-4c04-af56-6a7a594505ca] Running
	I1122 00:32:02.198267  252747 system_pods.go:126] duration metric: took 3.142921ms to wait for k8s-apps to be running ...
	I1122 00:32:02.198274  252747 system_svc.go:44] waiting for kubelet service to be running ....
	I1122 00:32:02.198324  252747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:32:02.215177  252747 system_svc.go:56] duration metric: took 16.894051ms WaitForService to wait for kubelet
	I1122 00:32:02.215202  252747 kubeadm.go:587] duration metric: took 4.442016177s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:32:02.215222  252747 node_conditions.go:102] verifying NodePressure condition ...
	I1122 00:32:02.218392  252747 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1122 00:32:02.218421  252747 node_conditions.go:123] node cpu capacity is 8
	I1122 00:32:02.218440  252747 node_conditions.go:105] duration metric: took 3.212244ms to run NodePressure ...
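
The NodePressure verification above reads the node's condition list (MemoryPressure, DiskPressure, PIDPressure, and so on). An equivalent one-liner, assuming kubectl access:

	kubectl get node no-preload-983546 \
	  -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'
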
	I1122 00:32:02.218457  252747 start.go:242] waiting for startup goroutines ...
	I1122 00:32:02.218471  252747 start.go:247] waiting for cluster config update ...
	I1122 00:32:02.218486  252747 start.go:256] writing updated cluster config ...
	I1122 00:32:02.218798  252747 ssh_runner.go:195] Run: rm -f paused
	I1122 00:32:02.223404  252747 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:32:02.226904  252747 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4psr2" in "kube-system" namespace to be "Ready" or be gone ...
	W1122 00:32:04.232009  252747 pod_ready.go:104] pod "coredns-66bc5c9577-4psr2" is not "Ready", error: <nil>
	W1122 00:32:03.324593  246023 pod_ready.go:104] pod "coredns-5dd5756b68-lwzsc" is not "Ready", error: <nil>
	W1122 00:32:05.325173  246023 pod_ready.go:104] pod "coredns-5dd5756b68-lwzsc" is not "Ready", error: <nil>
	W1122 00:32:07.327505  246023 pod_ready.go:104] pod "coredns-5dd5756b68-lwzsc" is not "Ready", error: <nil>
	I1122 00:32:03.754800  250396 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1122 00:32:03.758923  250396 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1122 00:32:03.758939  250396 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1122 00:32:03.772321  250396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1122 00:32:03.995884  250396 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1122 00:32:03.995992  250396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-084979 minikube.k8s.io/updated_at=2025_11_22T00_32_03_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785 minikube.k8s.io/name=embed-certs-084979 minikube.k8s.io/primary=true
	I1122 00:32:03.995993  250396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:32:04.071731  250396 ops.go:34] apiserver oom_adj: -16
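
The label command at 00:32:03.995 stamps minikube's metadata onto the node, and the oom_adj read confirms the apiserver runs at -16, i.e. shielded from the OOM killer. The applied labels can be double-checked with (assuming kubectl access):

	kubectl get node embed-certs-084979 --show-labels
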
	I1122 00:32:04.071877  250396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:32:04.572611  250396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:32:05.072035  250396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:32:05.572343  250396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:32:06.071986  250396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:32:06.572988  250396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:32:07.072771  250396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:32:07.572366  250396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:32:08.072176  250396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:32:08.572186  250396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:32:08.658591  250396 kubeadm.go:1114] duration metric: took 4.662653726s to wait for elevateKubeSystemPrivileges
	I1122 00:32:08.658637  250396 kubeadm.go:403] duration metric: took 16.222848793s to StartCluster
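
The repeated "kubectl get sa default" runs above are a poll: the default ServiceAccount only appears once the controller-manager's ServiceAccount controller is running, so minikube retries until it exists before relying on the minikube-rbac binding created at 00:32:03.995. A rough shell equivalent of that wait:

	until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done
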
	I1122 00:32:08.658668  250396 settings.go:142] acquiring lock: {Name:mk281bec5fc7c41e6f3fe8d6a1502b13a2db8fe5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:32:08.658754  250396 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21934-9122/kubeconfig
	I1122 00:32:08.661097  250396 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/kubeconfig: {Name:mk85f0b9d89ff824568ecbebf0d1111042a6bb93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:32:08.661390  250396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1122 00:32:08.661413  250396 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1122 00:32:08.661465  250396 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1122 00:32:08.661577  250396 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-084979"
	I1122 00:32:08.661605  250396 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-084979"
	I1122 00:32:08.661636  250396 host.go:66] Checking if "embed-certs-084979" exists ...
	I1122 00:32:08.661630  250396 addons.go:70] Setting default-storageclass=true in profile "embed-certs-084979"
	I1122 00:32:08.661655  250396 config.go:182] Loaded profile config "embed-certs-084979": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:32:08.661676  250396 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-084979"
	I1122 00:32:08.662134  250396 cli_runner.go:164] Run: docker container inspect embed-certs-084979 --format={{.State.Status}}
	I1122 00:32:08.662261  250396 cli_runner.go:164] Run: docker container inspect embed-certs-084979 --format={{.State.Status}}
	I1122 00:32:08.662773  250396 out.go:179] * Verifying Kubernetes components...
	I1122 00:32:08.665750  250396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:32:08.690424  250396 addons.go:239] Setting addon default-storageclass=true in "embed-certs-084979"
	I1122 00:32:08.690491  250396 host.go:66] Checking if "embed-certs-084979" exists ...
	I1122 00:32:08.690870  250396 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1122 00:32:08.691123  250396 cli_runner.go:164] Run: docker container inspect embed-certs-084979 --format={{.State.Status}}
	I1122 00:32:08.692185  250396 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:32:08.692207  250396 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1122 00:32:08.692258  250396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-084979
	I1122 00:32:08.720362  250396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/embed-certs-084979/id_rsa Username:docker}
	I1122 00:32:08.728927  250396 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1122 00:32:08.728956  250396 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1122 00:32:08.729017  250396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-084979
	I1122 00:32:08.756027  250396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/embed-certs-084979/id_rsa Username:docker}
	I1122 00:32:08.777256  250396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1122 00:32:08.844110  250396 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:32:08.845483  250396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:32:08.888146  250396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1122 00:32:08.991136  250396 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1122 00:32:08.994638  250396 node_ready.go:35] waiting up to 6m0s for node "embed-certs-084979" to be "Ready" ...
	I1122 00:32:09.263856  250396 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
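
The sed pipeline at 00:32:08.777 rewrites the coredns ConfigMap in place; reconstructed from that expression, the Corefile gains a log directive plus a hosts block resolving host.minikube.internal to the host gateway, matching the injection notice at 00:32:08.991:

	hosts {
	   192.168.94.1 host.minikube.internal
	   fallthrough
	}
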
	I1122 00:32:04.897223  218533 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1122 00:32:04.898234  218533 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1122 00:32:04.898296  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:32:04.898366  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:32:04.937190  218533 cri.go:89] found id: "28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5"
	I1122 00:32:04.937217  218533 cri.go:89] found id: ""
	I1122 00:32:04.937228  218533 logs.go:282] 1 containers: [28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5]
	I1122 00:32:04.937289  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:32:04.942473  218533 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1122 00:32:04.942610  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:32:04.979203  218533 cri.go:89] found id: ""
	I1122 00:32:04.979231  218533 logs.go:282] 0 containers: []
	W1122 00:32:04.979242  218533 logs.go:284] No container was found matching "etcd"
	I1122 00:32:04.979250  218533 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1122 00:32:04.979312  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:32:05.016272  218533 cri.go:89] found id: ""
	I1122 00:32:05.016303  218533 logs.go:282] 0 containers: []
	W1122 00:32:05.016315  218533 logs.go:284] No container was found matching "coredns"
	I1122 00:32:05.016322  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:32:05.016381  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:32:05.052256  218533 cri.go:89] found id: "f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:32:05.052288  218533 cri.go:89] found id: ""
	I1122 00:32:05.052299  218533 logs.go:282] 1 containers: [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33]
	I1122 00:32:05.052357  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:32:05.057464  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:32:05.057546  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:32:05.092269  218533 cri.go:89] found id: ""
	I1122 00:32:05.092294  218533 logs.go:282] 0 containers: []
	W1122 00:32:05.092304  218533 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:32:05.092312  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:32:05.092378  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:32:05.129968  218533 cri.go:89] found id: "93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c"
	I1122 00:32:05.129992  218533 cri.go:89] found id: ""
	I1122 00:32:05.130003  218533 logs.go:282] 1 containers: [93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c]
	I1122 00:32:05.130087  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:32:05.135490  218533 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1122 00:32:05.135553  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:32:05.170412  218533 cri.go:89] found id: ""
	I1122 00:32:05.170439  218533 logs.go:282] 0 containers: []
	W1122 00:32:05.170450  218533 logs.go:284] No container was found matching "kindnet"
	I1122 00:32:05.170458  218533 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:32:05.170518  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:32:05.209012  218533 cri.go:89] found id: ""
	I1122 00:32:05.209040  218533 logs.go:282] 0 containers: []
	W1122 00:32:05.209075  218533 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:32:05.209089  218533 logs.go:123] Gathering logs for kube-apiserver [28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5] ...
	I1122 00:32:05.209104  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5"
	I1122 00:32:05.252894  218533 logs.go:123] Gathering logs for kube-scheduler [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33] ...
	I1122 00:32:05.252929  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:32:05.321965  218533 logs.go:123] Gathering logs for kube-controller-manager [93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c] ...
	I1122 00:32:05.322005  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c"
	I1122 00:32:05.356581  218533 logs.go:123] Gathering logs for CRI-O ...
	I1122 00:32:05.356616  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1122 00:32:05.443942  218533 logs.go:123] Gathering logs for container status ...
	I1122 00:32:05.443983  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1122 00:32:05.482953  218533 logs.go:123] Gathering logs for kubelet ...
	I1122 00:32:05.482985  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:32:05.621438  218533 logs.go:123] Gathering logs for dmesg ...
	I1122 00:32:05.621484  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1122 00:32:05.640769  218533 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:32:05.640808  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1122 00:32:05.714531  218533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1122 00:32:08.215942  218533 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1122 00:32:08.216453  218533 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1122 00:32:08.216518  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:32:08.216584  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:32:08.252920  218533 cri.go:89] found id: "28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5"
	I1122 00:32:08.252957  218533 cri.go:89] found id: ""
	I1122 00:32:08.252969  218533 logs.go:282] 1 containers: [28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5]
	I1122 00:32:08.253034  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:32:08.258264  218533 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1122 00:32:08.258331  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:32:08.292110  218533 cri.go:89] found id: ""
	I1122 00:32:08.292133  218533 logs.go:282] 0 containers: []
	W1122 00:32:08.292146  218533 logs.go:284] No container was found matching "etcd"
	I1122 00:32:08.292154  218533 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1122 00:32:08.292213  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:32:08.325112  218533 cri.go:89] found id: ""
	I1122 00:32:08.325138  218533 logs.go:282] 0 containers: []
	W1122 00:32:08.325149  218533 logs.go:284] No container was found matching "coredns"
	I1122 00:32:08.325157  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:32:08.325214  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:32:08.357137  218533 cri.go:89] found id: "f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:32:08.357162  218533 cri.go:89] found id: ""
	I1122 00:32:08.357174  218533 logs.go:282] 1 containers: [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33]
	I1122 00:32:08.357230  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:32:08.361362  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:32:08.361418  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:32:08.395728  218533 cri.go:89] found id: ""
	I1122 00:32:08.395759  218533 logs.go:282] 0 containers: []
	W1122 00:32:08.395770  218533 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:32:08.395778  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:32:08.395840  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:32:08.427682  218533 cri.go:89] found id: "93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c"
	I1122 00:32:08.427707  218533 cri.go:89] found id: ""
	I1122 00:32:08.427718  218533 logs.go:282] 1 containers: [93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c]
	I1122 00:32:08.427777  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:32:08.432425  218533 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1122 00:32:08.432487  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:32:08.463456  218533 cri.go:89] found id: ""
	I1122 00:32:08.463484  218533 logs.go:282] 0 containers: []
	W1122 00:32:08.463494  218533 logs.go:284] No container was found matching "kindnet"
	I1122 00:32:08.463503  218533 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:32:08.463565  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:32:08.498532  218533 cri.go:89] found id: ""
	I1122 00:32:08.498561  218533 logs.go:282] 0 containers: []
	W1122 00:32:08.498578  218533 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:32:08.498591  218533 logs.go:123] Gathering logs for kube-apiserver [28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5] ...
	I1122 00:32:08.498611  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5"
	I1122 00:32:08.538389  218533 logs.go:123] Gathering logs for kube-scheduler [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33] ...
	I1122 00:32:08.538421  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:32:08.611006  218533 logs.go:123] Gathering logs for kube-controller-manager [93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c] ...
	I1122 00:32:08.611060  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c"
	I1122 00:32:08.643475  218533 logs.go:123] Gathering logs for CRI-O ...
	I1122 00:32:08.643510  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1122 00:32:08.747150  218533 logs.go:123] Gathering logs for container status ...
	I1122 00:32:08.747238  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1122 00:32:08.795942  218533 logs.go:123] Gathering logs for kubelet ...
	I1122 00:32:08.795979  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:32:08.931307  218533 logs.go:123] Gathering logs for dmesg ...
	I1122 00:32:08.931347  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1122 00:32:08.946935  218533 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:32:08.946965  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1122 00:32:09.033697  218533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
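
All three describe-nodes failures in this stream reduce to nothing answering on 8443. Before reading the fallback logs, it can be quicker to confirm the listener and the container directly on the node; a minimal sketch:

	sudo ss -ltnp | grep 8443 || echo "nothing listening on 8443"
	sudo crictl ps --name kube-apiserver --state running
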
	W1122 00:32:06.232766  252747 pod_ready.go:104] pod "coredns-66bc5c9577-4psr2" is not "Ready", error: <nil>
	W1122 00:32:08.233839  252747 pod_ready.go:104] pod "coredns-66bc5c9577-4psr2" is not "Ready", error: <nil>
	W1122 00:32:09.825147  246023 pod_ready.go:104] pod "coredns-5dd5756b68-lwzsc" is not "Ready", error: <nil>
	W1122 00:32:12.322914  246023 pod_ready.go:104] pod "coredns-5dd5756b68-lwzsc" is not "Ready", error: <nil>
	I1122 00:32:09.264969  250396 addons.go:530] duration metric: took 603.501615ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1122 00:32:09.497069  250396 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-084979" context rescaled to 1 replicas
	W1122 00:32:10.997553  250396 node_ready.go:57] node "embed-certs-084979" has "Ready":"False" status (will retry)
	I1122 00:32:11.535789  218533 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1122 00:32:11.536195  218533 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1122 00:32:11.536260  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:32:11.536317  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:32:11.564019  218533 cri.go:89] found id: "28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5"
	I1122 00:32:11.564042  218533 cri.go:89] found id: ""
	I1122 00:32:11.564085  218533 logs.go:282] 1 containers: [28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5]
	I1122 00:32:11.564144  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:32:11.567867  218533 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1122 00:32:11.567933  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:32:11.592888  218533 cri.go:89] found id: ""
	I1122 00:32:11.592910  218533 logs.go:282] 0 containers: []
	W1122 00:32:11.592919  218533 logs.go:284] No container was found matching "etcd"
	I1122 00:32:11.592926  218533 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1122 00:32:11.592977  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:32:11.615551  218533 cri.go:89] found id: ""
	I1122 00:32:11.615573  218533 logs.go:282] 0 containers: []
	W1122 00:32:11.615583  218533 logs.go:284] No container was found matching "coredns"
	I1122 00:32:11.615590  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:32:11.615646  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:32:11.640041  218533 cri.go:89] found id: "f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:32:11.640075  218533 cri.go:89] found id: ""
	I1122 00:32:11.640084  218533 logs.go:282] 1 containers: [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33]
	I1122 00:32:11.640127  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:32:11.643842  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:32:11.643888  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:32:11.667737  218533 cri.go:89] found id: ""
	I1122 00:32:11.667760  218533 logs.go:282] 0 containers: []
	W1122 00:32:11.667769  218533 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:32:11.667777  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:32:11.667829  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:32:11.692206  218533 cri.go:89] found id: "93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c"
	I1122 00:32:11.692227  218533 cri.go:89] found id: ""
	I1122 00:32:11.692236  218533 logs.go:282] 1 containers: [93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c]
	I1122 00:32:11.692288  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:32:11.695688  218533 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1122 00:32:11.695734  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:32:11.719309  218533 cri.go:89] found id: ""
	I1122 00:32:11.719330  218533 logs.go:282] 0 containers: []
	W1122 00:32:11.719336  218533 logs.go:284] No container was found matching "kindnet"
	I1122 00:32:11.719341  218533 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:32:11.719382  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:32:11.743535  218533 cri.go:89] found id: ""
	I1122 00:32:11.743558  218533 logs.go:282] 0 containers: []
	W1122 00:32:11.743567  218533 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:32:11.743577  218533 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:32:11.743590  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1122 00:32:11.798421  218533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1122 00:32:11.798443  218533 logs.go:123] Gathering logs for kube-apiserver [28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5] ...
	I1122 00:32:11.798458  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5"
	I1122 00:32:11.833336  218533 logs.go:123] Gathering logs for kube-scheduler [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33] ...
	I1122 00:32:11.833363  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:32:11.883020  218533 logs.go:123] Gathering logs for kube-controller-manager [93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c] ...
	I1122 00:32:11.883047  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c"
	I1122 00:32:11.906415  218533 logs.go:123] Gathering logs for CRI-O ...
	I1122 00:32:11.906436  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1122 00:32:11.961581  218533 logs.go:123] Gathering logs for container status ...
	I1122 00:32:11.961605  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1122 00:32:11.990349  218533 logs.go:123] Gathering logs for kubelet ...
	I1122 00:32:11.990371  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:32:12.073562  218533 logs.go:123] Gathering logs for dmesg ...
	I1122 00:32:12.073590  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
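The 218533 entries above are one full pass of minikube's apiserver wait loop: probe /healthz on the node, and when the connection is refused, enumerate the CRI containers and collect logs before trying again about three seconds later. A minimal Go sketch of the probe half of that loop; the TLS handling here is an assumption (a real client would trust the cluster CA rather than skip verification), and this is not minikube's api_server.go:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Assumption: skip verification of the apiserver's self-signed cert;
		// a real client would pin the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for {
		resp, err := client.Get("https://192.168.103.2:8443/healthz")
		if err != nil {
			// Corresponds to the "stopped: ... connection refused" lines above.
			fmt.Println("stopped:", err)
			time.Sleep(3 * time.Second) // the log shows roughly 3s between attempts
			continue
		}
		resp.Body.Close()
		fmt.Println("healthz:", resp.Status)
		return
	}
}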
	W1122 00:32:10.731437  252747 pod_ready.go:104] pod "coredns-66bc5c9577-4psr2" is not "Ready", error: <nil>
	W1122 00:32:12.732122  252747 pod_ready.go:104] pod "coredns-66bc5c9577-4psr2" is not "Ready", error: <nil>
	W1122 00:32:14.732512  252747 pod_ready.go:104] pod "coredns-66bc5c9577-4psr2" is not "Ready", error: <nil>
	W1122 00:32:14.322986  246023 pod_ready.go:104] pod "coredns-5dd5756b68-lwzsc" is not "Ready", error: <nil>
	I1122 00:32:14.824010  246023 pod_ready.go:94] pod "coredns-5dd5756b68-lwzsc" is "Ready"
	I1122 00:32:14.824037  246023 pod_ready.go:86] duration metric: took 31.505715835s for pod "coredns-5dd5756b68-lwzsc" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:32:14.826639  246023 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-377321" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:32:14.830991  246023 pod_ready.go:94] pod "etcd-old-k8s-version-377321" is "Ready"
	I1122 00:32:14.831015  246023 pod_ready.go:86] duration metric: took 4.355984ms for pod "etcd-old-k8s-version-377321" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:32:14.833840  246023 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-377321" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:32:14.837703  246023 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-377321" is "Ready"
	I1122 00:32:14.837724  246023 pod_ready.go:86] duration metric: took 3.863315ms for pod "kube-apiserver-old-k8s-version-377321" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:32:14.840603  246023 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-377321" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:32:15.022440  246023 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-377321" is "Ready"
	I1122 00:32:15.022464  246023 pod_ready.go:86] duration metric: took 181.838073ms for pod "kube-controller-manager-old-k8s-version-377321" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:32:15.222995  246023 pod_ready.go:83] waiting for pod "kube-proxy-pz8cc" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:32:15.622529  246023 pod_ready.go:94] pod "kube-proxy-pz8cc" is "Ready"
	I1122 00:32:15.622552  246023 pod_ready.go:86] duration metric: took 399.533017ms for pod "kube-proxy-pz8cc" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:32:15.822978  246023 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-377321" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:32:16.222462  246023 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-377321" is "Ready"
	I1122 00:32:16.222487  246023 pod_ready.go:86] duration metric: took 399.487283ms for pod "kube-scheduler-old-k8s-version-377321" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:32:16.222504  246023 pod_ready.go:40] duration metric: took 32.908075029s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
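The pod_ready lines for process 246023 poll each kube-system pod until it reports the Ready condition or disappears, recording a per-pod duration metric. A sketch of that "Ready or be gone" predicate with client-go; all names here are hypothetical, since minikube's pod_ready.go itself is not shown in this report:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReadyOrGone reports true when the pod has the Ready condition set to
// True, or when the pod no longer exists (the "or be gone" half above).
func podReadyOrGone(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		return true, nil // pod is gone, stop waiting
	}
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()
	for {
		ok, err := podReadyOrGone(ctx, cs, "kube-system", "coredns-5dd5756b68-lwzsc")
		if err == nil && ok {
			fmt.Println("pod is Ready (or gone)")
			return
		}
		time.Sleep(2 * time.Second) // the retries above arrive every couple of seconds
	}
}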
	I1122 00:32:16.265046  246023 start.go:628] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1122 00:32:16.266720  246023 out.go:203] 
	W1122 00:32:16.267945  246023 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1122 00:32:16.269101  246023 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1122 00:32:16.270254  246023 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-377321" cluster and "default" namespace by default
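The skew warning in the lines above is mechanical: kubectl is only supported within one minor version of the apiserver, so a 1.34 client against a 1.28 cluster reports a minor skew of 34 - 28 = 6 and prints the incompatibility hint. A small sketch of that check:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minor extracts the minor component of a "x.y.z" version string.
func minor(v string) int {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	m, _ := strconv.Atoi(parts[1])
	return m
}

func main() {
	client, cluster := "1.34.2", "1.28.0"
	skew := minor(client) - minor(cluster)
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", client, cluster, skew)
	if skew > 1 { // kubectl officially supports only +/-1 minor version
		fmt.Printf("! kubectl is version %s, which may have incompatibilities with Kubernetes %s.\n", client, cluster)
	}
}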
	W1122 00:32:13.497333  250396 node_ready.go:57] node "embed-certs-084979" has "Ready":"False" status (will retry)
	W1122 00:32:15.497785  250396 node_ready.go:57] node "embed-certs-084979" has "Ready":"False" status (will retry)
	W1122 00:32:17.997471  250396 node_ready.go:57] node "embed-certs-084979" has "Ready":"False" status (will retry)
	I1122 00:32:14.587256  218533 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1122 00:32:14.587644  218533 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1122 00:32:14.587699  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:32:14.587755  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:32:14.614678  218533 cri.go:89] found id: "28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5"
	I1122 00:32:14.614701  218533 cri.go:89] found id: ""
	I1122 00:32:14.614711  218533 logs.go:282] 1 containers: [28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5]
	I1122 00:32:14.614768  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:32:14.618481  218533 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1122 00:32:14.618536  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:32:14.643735  218533 cri.go:89] found id: ""
	I1122 00:32:14.643757  218533 logs.go:282] 0 containers: []
	W1122 00:32:14.643766  218533 logs.go:284] No container was found matching "etcd"
	I1122 00:32:14.643773  218533 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1122 00:32:14.643822  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:32:14.669121  218533 cri.go:89] found id: ""
	I1122 00:32:14.669145  218533 logs.go:282] 0 containers: []
	W1122 00:32:14.669155  218533 logs.go:284] No container was found matching "coredns"
	I1122 00:32:14.669162  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:32:14.669221  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:32:14.694038  218533 cri.go:89] found id: "f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:32:14.694085  218533 cri.go:89] found id: ""
	I1122 00:32:14.694095  218533 logs.go:282] 1 containers: [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33]
	I1122 00:32:14.694153  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:32:14.697687  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:32:14.697733  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:32:14.722140  218533 cri.go:89] found id: ""
	I1122 00:32:14.722159  218533 logs.go:282] 0 containers: []
	W1122 00:32:14.722166  218533 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:32:14.722171  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:32:14.722219  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:32:14.750643  218533 cri.go:89] found id: "93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c"
	I1122 00:32:14.750662  218533 cri.go:89] found id: ""
	I1122 00:32:14.750670  218533 logs.go:282] 1 containers: [93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c]
	I1122 00:32:14.750718  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:32:14.754450  218533 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1122 00:32:14.754501  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:32:14.780094  218533 cri.go:89] found id: ""
	I1122 00:32:14.780118  218533 logs.go:282] 0 containers: []
	W1122 00:32:14.780127  218533 logs.go:284] No container was found matching "kindnet"
	I1122 00:32:14.780135  218533 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:32:14.780191  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:32:14.806138  218533 cri.go:89] found id: ""
	I1122 00:32:14.806162  218533 logs.go:282] 0 containers: []
	W1122 00:32:14.806174  218533 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:32:14.806187  218533 logs.go:123] Gathering logs for dmesg ...
	I1122 00:32:14.806203  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1122 00:32:14.819748  218533 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:32:14.819774  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1122 00:32:14.876798  218533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1122 00:32:14.876833  218533 logs.go:123] Gathering logs for kube-apiserver [28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5] ...
	I1122 00:32:14.876852  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5"
	I1122 00:32:14.909027  218533 logs.go:123] Gathering logs for kube-scheduler [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33] ...
	I1122 00:32:14.909062  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:32:14.960970  218533 logs.go:123] Gathering logs for kube-controller-manager [93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c] ...
	I1122 00:32:14.960994  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c"
	I1122 00:32:14.986818  218533 logs.go:123] Gathering logs for CRI-O ...
	I1122 00:32:14.986846  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1122 00:32:15.043330  218533 logs.go:123] Gathering logs for container status ...
	I1122 00:32:15.043354  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1122 00:32:15.071710  218533 logs.go:123] Gathering logs for kubelet ...
	I1122 00:32:15.071762  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
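Each "listing CRI containers" / "found id:" pair above comes from running crictl with a name filter and reading one container ID per non-empty output line. Run locally instead of through ssh_runner, the step looks roughly like this (a sketch, not minikube's cri.go; the command string is the one quoted in the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs runs `sudo crictl ps -a --quiet --name=<name>` and returns
// one ID per non-empty output line, mirroring the "found id:" log lines.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if line = strings.TrimSpace(line); line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns"} {
		ids, err := containerIDs(name)
		if err != nil {
			fmt.Println(name, "error:", err)
			continue
		}
		fmt.Printf("%d containers for %q: %v\n", len(ids), name, ids)
	}
}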
	I1122 00:32:17.659860  218533 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1122 00:32:17.660228  218533 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1122 00:32:17.660291  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:32:17.660342  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:32:17.687605  218533 cri.go:89] found id: "28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5"
	I1122 00:32:17.687626  218533 cri.go:89] found id: ""
	I1122 00:32:17.687634  218533 logs.go:282] 1 containers: [28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5]
	I1122 00:32:17.687679  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:32:17.691281  218533 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1122 00:32:17.691334  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:32:17.715534  218533 cri.go:89] found id: ""
	I1122 00:32:17.715555  218533 logs.go:282] 0 containers: []
	W1122 00:32:17.715560  218533 logs.go:284] No container was found matching "etcd"
	I1122 00:32:17.715565  218533 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1122 00:32:17.715604  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:32:17.740688  218533 cri.go:89] found id: ""
	I1122 00:32:17.740708  218533 logs.go:282] 0 containers: []
	W1122 00:32:17.740717  218533 logs.go:284] No container was found matching "coredns"
	I1122 00:32:17.740724  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:32:17.740771  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:32:17.765719  218533 cri.go:89] found id: "f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:32:17.765743  218533 cri.go:89] found id: ""
	I1122 00:32:17.765753  218533 logs.go:282] 1 containers: [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33]
	I1122 00:32:17.765799  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:32:17.769489  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:32:17.769548  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:32:17.793906  218533 cri.go:89] found id: ""
	I1122 00:32:17.793929  218533 logs.go:282] 0 containers: []
	W1122 00:32:17.793937  218533 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:32:17.793944  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:32:17.794008  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:32:17.818834  218533 cri.go:89] found id: "93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c"
	I1122 00:32:17.818854  218533 cri.go:89] found id: ""
	I1122 00:32:17.818863  218533 logs.go:282] 1 containers: [93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c]
	I1122 00:32:17.818917  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:32:17.822475  218533 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1122 00:32:17.822530  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:32:17.847077  218533 cri.go:89] found id: ""
	I1122 00:32:17.847103  218533 logs.go:282] 0 containers: []
	W1122 00:32:17.847113  218533 logs.go:284] No container was found matching "kindnet"
	I1122 00:32:17.847137  218533 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:32:17.847186  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:32:17.873154  218533 cri.go:89] found id: ""
	I1122 00:32:17.873188  218533 logs.go:282] 0 containers: []
	W1122 00:32:17.873199  218533 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:32:17.873210  218533 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:32:17.873222  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1122 00:32:17.928354  218533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1122 00:32:17.928378  218533 logs.go:123] Gathering logs for kube-apiserver [28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5] ...
	I1122 00:32:17.928394  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5"
	I1122 00:32:17.961215  218533 logs.go:123] Gathering logs for kube-scheduler [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33] ...
	I1122 00:32:17.961243  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:32:18.014092  218533 logs.go:123] Gathering logs for kube-controller-manager [93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c] ...
	I1122 00:32:18.014127  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c"
	I1122 00:32:18.040632  218533 logs.go:123] Gathering logs for CRI-O ...
	I1122 00:32:18.040657  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1122 00:32:18.098144  218533 logs.go:123] Gathering logs for container status ...
	I1122 00:32:18.098173  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1122 00:32:18.127668  218533 logs.go:123] Gathering logs for kubelet ...
	I1122 00:32:18.127699  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:32:18.212045  218533 logs.go:123] Gathering logs for dmesg ...
	I1122 00:32:18.212085  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1122 00:32:17.231619  252747 pod_ready.go:104] pod "coredns-66bc5c9577-4psr2" is not "Ready", error: <nil>
	W1122 00:32:19.731809  252747 pod_ready.go:104] pod "coredns-66bc5c9577-4psr2" is not "Ready", error: <nil>
	W1122 00:32:20.498174  250396 node_ready.go:57] node "embed-certs-084979" has "Ready":"False" status (will retry)
	W1122 00:32:22.997834  250396 node_ready.go:57] node "embed-certs-084979" has "Ready":"False" status (will retry)
	I1122 00:32:20.725824  218533 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1122 00:32:20.726222  218533 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1122 00:32:20.726275  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:32:20.726331  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:32:20.754921  218533 cri.go:89] found id: "28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5"
	I1122 00:32:20.754940  218533 cri.go:89] found id: ""
	I1122 00:32:20.754949  218533 logs.go:282] 1 containers: [28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5]
	I1122 00:32:20.754995  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:32:20.758832  218533 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1122 00:32:20.758879  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:32:20.783771  218533 cri.go:89] found id: ""
	I1122 00:32:20.783790  218533 logs.go:282] 0 containers: []
	W1122 00:32:20.783797  218533 logs.go:284] No container was found matching "etcd"
	I1122 00:32:20.783803  218533 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1122 00:32:20.783856  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:32:20.810449  218533 cri.go:89] found id: ""
	I1122 00:32:20.810472  218533 logs.go:282] 0 containers: []
	W1122 00:32:20.810480  218533 logs.go:284] No container was found matching "coredns"
	I1122 00:32:20.810486  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:32:20.810543  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:32:20.837159  218533 cri.go:89] found id: "f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:32:20.837181  218533 cri.go:89] found id: ""
	I1122 00:32:20.837190  218533 logs.go:282] 1 containers: [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33]
	I1122 00:32:20.837238  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:32:20.840845  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:32:20.840905  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:32:20.865439  218533 cri.go:89] found id: ""
	I1122 00:32:20.865467  218533 logs.go:282] 0 containers: []
	W1122 00:32:20.865475  218533 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:32:20.865481  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:32:20.865541  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:32:20.891345  218533 cri.go:89] found id: "93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c"
	I1122 00:32:20.891369  218533 cri.go:89] found id: ""
	I1122 00:32:20.891377  218533 logs.go:282] 1 containers: [93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c]
	I1122 00:32:20.891418  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:32:20.895001  218533 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1122 00:32:20.895104  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:32:20.921028  218533 cri.go:89] found id: ""
	I1122 00:32:20.921066  218533 logs.go:282] 0 containers: []
	W1122 00:32:20.921076  218533 logs.go:284] No container was found matching "kindnet"
	I1122 00:32:20.921084  218533 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:32:20.921137  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:32:20.947527  218533 cri.go:89] found id: ""
	I1122 00:32:20.947552  218533 logs.go:282] 0 containers: []
	W1122 00:32:20.947562  218533 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:32:20.947579  218533 logs.go:123] Gathering logs for kubelet ...
	I1122 00:32:20.947593  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:32:21.043118  218533 logs.go:123] Gathering logs for dmesg ...
	I1122 00:32:21.043149  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1122 00:32:21.058034  218533 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:32:21.058111  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1122 00:32:21.116544  218533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1122 00:32:21.116566  218533 logs.go:123] Gathering logs for kube-apiserver [28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5] ...
	I1122 00:32:21.116578  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5"
	I1122 00:32:21.147804  218533 logs.go:123] Gathering logs for kube-scheduler [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33] ...
	I1122 00:32:21.147832  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:32:21.199577  218533 logs.go:123] Gathering logs for kube-controller-manager [93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c] ...
	I1122 00:32:21.199605  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c"
	I1122 00:32:21.225224  218533 logs.go:123] Gathering logs for CRI-O ...
	I1122 00:32:21.225255  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1122 00:32:21.281329  218533 logs.go:123] Gathering logs for container status ...
	I1122 00:32:21.281354  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1122 00:32:23.810365  218533 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1122 00:32:23.810717  218533 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1122 00:32:23.810772  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:32:23.810818  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:32:23.837384  218533 cri.go:89] found id: "28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5"
	I1122 00:32:23.837407  218533 cri.go:89] found id: ""
	I1122 00:32:23.837417  218533 logs.go:282] 1 containers: [28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5]
	I1122 00:32:23.837466  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:32:23.841228  218533 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1122 00:32:23.841300  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:32:23.865463  218533 cri.go:89] found id: ""
	I1122 00:32:23.865483  218533 logs.go:282] 0 containers: []
	W1122 00:32:23.865490  218533 logs.go:284] No container was found matching "etcd"
	I1122 00:32:23.865496  218533 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1122 00:32:23.865538  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:32:23.891829  218533 cri.go:89] found id: ""
	I1122 00:32:23.891849  218533 logs.go:282] 0 containers: []
	W1122 00:32:23.891856  218533 logs.go:284] No container was found matching "coredns"
	I1122 00:32:23.891865  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:32:23.891924  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:32:23.917195  218533 cri.go:89] found id: "f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:32:23.917220  218533 cri.go:89] found id: ""
	I1122 00:32:23.917231  218533 logs.go:282] 1 containers: [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33]
	I1122 00:32:23.917275  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:32:23.920785  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:32:23.920844  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:32:23.944914  218533 cri.go:89] found id: ""
	I1122 00:32:23.944936  218533 logs.go:282] 0 containers: []
	W1122 00:32:23.944945  218533 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:32:23.944951  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:32:23.944993  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:32:23.972047  218533 cri.go:89] found id: "93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c"
	I1122 00:32:23.972093  218533 cri.go:89] found id: ""
	I1122 00:32:23.972101  218533 logs.go:282] 1 containers: [93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c]
	I1122 00:32:23.972143  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:32:23.975663  218533 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1122 00:32:23.975714  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:32:24.000811  218533 cri.go:89] found id: ""
	I1122 00:32:24.000830  218533 logs.go:282] 0 containers: []
	W1122 00:32:24.000837  218533 logs.go:284] No container was found matching "kindnet"
	I1122 00:32:24.000843  218533 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:32:24.000888  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:32:24.025467  218533 cri.go:89] found id: ""
	I1122 00:32:24.025484  218533 logs.go:282] 0 containers: []
	W1122 00:32:24.025491  218533 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:32:24.025499  218533 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:32:24.025510  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1122 00:32:24.077907  218533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1122 00:32:24.077926  218533 logs.go:123] Gathering logs for kube-apiserver [28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5] ...
	I1122 00:32:24.077938  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5"
	I1122 00:32:24.109386  218533 logs.go:123] Gathering logs for kube-scheduler [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33] ...
	I1122 00:32:24.109411  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:32:24.157948  218533 logs.go:123] Gathering logs for kube-controller-manager [93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c] ...
	I1122 00:32:24.157980  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c"
	I1122 00:32:24.183206  218533 logs.go:123] Gathering logs for CRI-O ...
	I1122 00:32:24.183234  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1122 00:32:24.236823  218533 logs.go:123] Gathering logs for container status ...
	I1122 00:32:24.236845  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1122 00:32:24.265620  218533 logs.go:123] Gathering logs for kubelet ...
	I1122 00:32:24.265641  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:32:24.355847  218533 logs.go:123] Gathering logs for dmesg ...
	I1122 00:32:24.355869  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
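The host-level "Gathering logs for ..." sources (kubelet, CRI-O, dmesg) are plain shell pipelines, quoted verbatim in the Run: lines above. A sketch that executes the same three pipelines locally rather than over SSH:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Command strings copied from the Run: lines in the log above.
	sources := []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"CRI-O", "sudo journalctl -u crio -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
	}
	for _, s := range sources {
		out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
		fmt.Printf("==> %s <== (err=%v)\n%s\n", s.name, err, out)
	}
}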
	W1122 00:32:22.231718  252747 pod_ready.go:104] pod "coredns-66bc5c9577-4psr2" is not "Ready", error: <nil>
	W1122 00:32:24.231821  252747 pod_ready.go:104] pod "coredns-66bc5c9577-4psr2" is not "Ready", error: <nil>
	W1122 00:32:25.497582  250396 node_ready.go:57] node "embed-certs-084979" has "Ready":"False" status (will retry)
	W1122 00:32:27.997797  250396 node_ready.go:57] node "embed-certs-084979" has "Ready":"False" status (will retry)
	I1122 00:32:26.870191  218533 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1122 00:32:26.870569  218533 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1122 00:32:26.870618  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:32:26.870668  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:32:26.897294  218533 cri.go:89] found id: "28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5"
	I1122 00:32:26.897316  218533 cri.go:89] found id: ""
	I1122 00:32:26.897332  218533 logs.go:282] 1 containers: [28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5]
	I1122 00:32:26.897379  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:32:26.901169  218533 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1122 00:32:26.901224  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:32:26.925844  218533 cri.go:89] found id: ""
	I1122 00:32:26.925867  218533 logs.go:282] 0 containers: []
	W1122 00:32:26.925877  218533 logs.go:284] No container was found matching "etcd"
	I1122 00:32:26.925885  218533 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1122 00:32:26.925940  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:32:26.950625  218533 cri.go:89] found id: ""
	I1122 00:32:26.950650  218533 logs.go:282] 0 containers: []
	W1122 00:32:26.950660  218533 logs.go:284] No container was found matching "coredns"
	I1122 00:32:26.950668  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:32:26.950712  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:32:26.976232  218533 cri.go:89] found id: "f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:32:26.976252  218533 cri.go:89] found id: ""
	I1122 00:32:26.976261  218533 logs.go:282] 1 containers: [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33]
	I1122 00:32:26.976309  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:32:26.980027  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:32:26.980097  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:32:27.006262  218533 cri.go:89] found id: ""
	I1122 00:32:27.006287  218533 logs.go:282] 0 containers: []
	W1122 00:32:27.006297  218533 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:32:27.006305  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:32:27.006355  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:32:27.031280  218533 cri.go:89] found id: "93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c"
	I1122 00:32:27.031301  218533 cri.go:89] found id: ""
	I1122 00:32:27.031308  218533 logs.go:282] 1 containers: [93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c]
	I1122 00:32:27.031356  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:32:27.034880  218533 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1122 00:32:27.034936  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:32:27.059733  218533 cri.go:89] found id: ""
	I1122 00:32:27.059750  218533 logs.go:282] 0 containers: []
	W1122 00:32:27.059756  218533 logs.go:284] No container was found matching "kindnet"
	I1122 00:32:27.059762  218533 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:32:27.059813  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:32:27.084321  218533 cri.go:89] found id: ""
	I1122 00:32:27.084353  218533 logs.go:282] 0 containers: []
	W1122 00:32:27.084362  218533 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:32:27.084373  218533 logs.go:123] Gathering logs for CRI-O ...
	I1122 00:32:27.084391  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1122 00:32:27.136326  218533 logs.go:123] Gathering logs for container status ...
	I1122 00:32:27.136349  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1122 00:32:27.164195  218533 logs.go:123] Gathering logs for kubelet ...
	I1122 00:32:27.164223  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:32:27.246634  218533 logs.go:123] Gathering logs for dmesg ...
	I1122 00:32:27.246659  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1122 00:32:27.260429  218533 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:32:27.260454  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1122 00:32:27.315384  218533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1122 00:32:27.315403  218533 logs.go:123] Gathering logs for kube-apiserver [28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5] ...
	I1122 00:32:27.315416  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5"
	I1122 00:32:27.348407  218533 logs.go:123] Gathering logs for kube-scheduler [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33] ...
	I1122 00:32:27.348429  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:32:27.399816  218533 logs.go:123] Gathering logs for kube-controller-manager [93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c] ...
	I1122 00:32:27.399841  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c"
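The per-container gathering steps tail the last 400 lines of each container found earlier. One such step, using a container ID taken from the "found id:" lines above, reduces to the quoted crictl invocation (a local sketch):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Container ID from the kube-apiserver "found id:" lines above.
	id := "28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5"
	out, err := exec.Command("/bin/bash", "-c",
		"sudo /usr/local/bin/crictl logs --tail 400 "+id).CombinedOutput()
	if err != nil {
		fmt.Println("crictl logs failed:", err)
	}
	fmt.Printf("%s", out)
}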
	
	
	==> CRI-O <==
	Nov 22 00:32:03 old-k8s-version-377321 crio[566]: time="2025-11-22T00:32:03.937956137Z" level=info msg="Started container" PID=1748 containerID=2bc31ae3c5f85f8677f2bd3963dacd958e4d2494b6b49f67fdfb9a0c31c680bb description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-mj7xq/dashboard-metrics-scraper id=1bbb047a-fde6-4a35-be56-7d908fc95c82 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6feca78516ad50e4fe97b3f97bada918800401550ffa3af28e6adfb968d1c990
	Nov 22 00:32:04 old-k8s-version-377321 crio[566]: time="2025-11-22T00:32:04.901781164Z" level=info msg="Removing container: cbb7ae0c3ce0a915c8fce0496a4994600161a185f84d868554dcf1510250a817" id=512ef12d-4f13-40d9-9675-0c02c8ade803 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 22 00:32:04 old-k8s-version-377321 crio[566]: time="2025-11-22T00:32:04.917023346Z" level=info msg="Removed container cbb7ae0c3ce0a915c8fce0496a4994600161a185f84d868554dcf1510250a817: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-mj7xq/dashboard-metrics-scraper" id=512ef12d-4f13-40d9-9675-0c02c8ade803 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 22 00:32:13 old-k8s-version-377321 crio[566]: time="2025-11-22T00:32:13.920959768Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=74ddf206-f1c0-4267-86aa-16ec98b17296 name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:32:13 old-k8s-version-377321 crio[566]: time="2025-11-22T00:32:13.921884831Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=c7ab721a-703f-4b70-b934-d302f35996a7 name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:32:13 old-k8s-version-377321 crio[566]: time="2025-11-22T00:32:13.922866705Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=6efebf0d-977b-4b97-9cb9-b5a76a6f4b49 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:32:13 old-k8s-version-377321 crio[566]: time="2025-11-22T00:32:13.922970221Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:32:13 old-k8s-version-377321 crio[566]: time="2025-11-22T00:32:13.926845936Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:32:13 old-k8s-version-377321 crio[566]: time="2025-11-22T00:32:13.92698979Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/4ae7b46de1f51678fa2b4142f4cab6c3b8cb8118eaf7ad88f9d72617d63b3070/merged/etc/passwd: no such file or directory"
	Nov 22 00:32:13 old-k8s-version-377321 crio[566]: time="2025-11-22T00:32:13.927012386Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/4ae7b46de1f51678fa2b4142f4cab6c3b8cb8118eaf7ad88f9d72617d63b3070/merged/etc/group: no such file or directory"
	Nov 22 00:32:13 old-k8s-version-377321 crio[566]: time="2025-11-22T00:32:13.927286304Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:32:13 old-k8s-version-377321 crio[566]: time="2025-11-22T00:32:13.956427253Z" level=info msg="Created container 93bfe3b02b3028d26df5463ec28e27751aec1aaa96f6b52964a50d6b63a63975: kube-system/storage-provisioner/storage-provisioner" id=6efebf0d-977b-4b97-9cb9-b5a76a6f4b49 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:32:13 old-k8s-version-377321 crio[566]: time="2025-11-22T00:32:13.956985812Z" level=info msg="Starting container: 93bfe3b02b3028d26df5463ec28e27751aec1aaa96f6b52964a50d6b63a63975" id=e32c6915-22e0-4597-b5fd-572efba84f3d name=/runtime.v1.RuntimeService/StartContainer
	Nov 22 00:32:13 old-k8s-version-377321 crio[566]: time="2025-11-22T00:32:13.95864215Z" level=info msg="Started container" PID=1766 containerID=93bfe3b02b3028d26df5463ec28e27751aec1aaa96f6b52964a50d6b63a63975 description=kube-system/storage-provisioner/storage-provisioner id=e32c6915-22e0-4597-b5fd-572efba84f3d name=/runtime.v1.RuntimeService/StartContainer sandboxID=8da5c1c27d10fbc44e9019ce3c31b6daa2edfea930125bffb844fd602aab24d2
	Nov 22 00:32:21 old-k8s-version-377321 crio[566]: time="2025-11-22T00:32:21.797532833Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=52b1f0ca-3bd7-43ee-9cd7-d494f4e8a14e name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:32:21 old-k8s-version-377321 crio[566]: time="2025-11-22T00:32:21.798546028Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=2a74c662-ef34-4ff8-921c-6f1663b1ab96 name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:32:21 old-k8s-version-377321 crio[566]: time="2025-11-22T00:32:21.799519829Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-mj7xq/dashboard-metrics-scraper" id=8caaeada-cdf1-495e-bb8a-486b0a00325d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:32:21 old-k8s-version-377321 crio[566]: time="2025-11-22T00:32:21.799630983Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:32:21 old-k8s-version-377321 crio[566]: time="2025-11-22T00:32:21.804980957Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:32:21 old-k8s-version-377321 crio[566]: time="2025-11-22T00:32:21.805482957Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:32:21 old-k8s-version-377321 crio[566]: time="2025-11-22T00:32:21.831622745Z" level=info msg="Created container 1159f8806d56e7efa54dd600adca00cd67da4d0d67ec86f0feef41418ddc6a5d: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-mj7xq/dashboard-metrics-scraper" id=8caaeada-cdf1-495e-bb8a-486b0a00325d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:32:21 old-k8s-version-377321 crio[566]: time="2025-11-22T00:32:21.832175097Z" level=info msg="Starting container: 1159f8806d56e7efa54dd600adca00cd67da4d0d67ec86f0feef41418ddc6a5d" id=f8258c47-fb18-428e-9777-1a65ae5ffea0 name=/runtime.v1.RuntimeService/StartContainer
	Nov 22 00:32:21 old-k8s-version-377321 crio[566]: time="2025-11-22T00:32:21.8338897Z" level=info msg="Started container" PID=1804 containerID=1159f8806d56e7efa54dd600adca00cd67da4d0d67ec86f0feef41418ddc6a5d description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-mj7xq/dashboard-metrics-scraper id=f8258c47-fb18-428e-9777-1a65ae5ffea0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6feca78516ad50e4fe97b3f97bada918800401550ffa3af28e6adfb968d1c990
	Nov 22 00:32:21 old-k8s-version-377321 crio[566]: time="2025-11-22T00:32:21.941797858Z" level=info msg="Removing container: 2bc31ae3c5f85f8677f2bd3963dacd958e4d2494b6b49f67fdfb9a0c31c680bb" id=d94e18e6-594f-460b-a789-34591e73ee8e name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 22 00:32:21 old-k8s-version-377321 crio[566]: time="2025-11-22T00:32:21.953486809Z" level=info msg="Removed container 2bc31ae3c5f85f8677f2bd3963dacd958e4d2494b6b49f67fdfb9a0c31c680bb: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-mj7xq/dashboard-metrics-scraper" id=d94e18e6-594f-460b-a789-34591e73ee8e name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	1159f8806d56e       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           8 seconds ago       Exited              dashboard-metrics-scraper   2                   6feca78516ad5       dashboard-metrics-scraper-5f989dc9cf-mj7xq       kubernetes-dashboard
	93bfe3b02b302       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           16 seconds ago      Running             storage-provisioner         1                   8da5c1c27d10f       storage-provisioner                              kube-system
	d3398b58126a8       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   29 seconds ago      Running             kubernetes-dashboard        0                   4523b7bd40065       kubernetes-dashboard-8694d4445c-8fvls            kubernetes-dashboard
	801c8d5d08f56       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           47 seconds ago      Running             coredns                     0                   95d1eb32ca6c8       coredns-5dd5756b68-lwzsc                         kube-system
	dd56e9f3efdf1       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           47 seconds ago      Running             busybox                     1                   d1a817c1ea701       busybox                                          default
	6a1f00984a7df       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           47 seconds ago      Running             kube-proxy                  0                   6273ec66b8807       kube-proxy-pz8cc                                 kube-system
	570f113a27a51       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           47 seconds ago      Running             kindnet-cni                 0                   c932b507c6e5c       kindnet-f996p                                    kube-system
	6fd900059ec31       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           47 seconds ago      Exited              storage-provisioner         0                   8da5c1c27d10f       storage-provisioner                              kube-system
	0c7b31cf741c7       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           51 seconds ago      Running             etcd                        0                   af7a1f88de16b       etcd-old-k8s-version-377321                      kube-system
	5819251d36741       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           51 seconds ago      Running             kube-apiserver              0                   b9646e1892734       kube-apiserver-old-k8s-version-377321            kube-system
	ed98561b5f5ab       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           51 seconds ago      Running             kube-controller-manager     0                   2b81970a3e077       kube-controller-manager-old-k8s-version-377321   kube-system
	ab6a019fd3f49       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           51 seconds ago      Running             kube-scheduler              0                   ff76ff91af545       kube-scheduler-old-k8s-version-377321            kube-system
	
	
	==> coredns [801c8d5d08f560e17fd4023d35002a9afed8af82fe042078f52484439238fd06] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:54212 - 636 "HINFO IN 2872590925639124345.4534167897300206030. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.085809203s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
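The repeated "Still waiting on: \"kubernetes\"" lines come from CoreDNS's ready plugin, which answers HTTP 200 on :8181/ready only once every enabled plugin, here the kubernetes plugin that was waiting for the API, reports ready. A probe against it could look like this; the port and path are CoreDNS defaults, not values taken from this report:

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// CoreDNS default readiness endpoint (assumed, per upstream defaults).
	resp, err := http.Get("http://127.0.0.1:8181/ready")
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}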
	
	
	==> describe nodes <==
	Name:               old-k8s-version-377321
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-377321
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785
	                    minikube.k8s.io/name=old-k8s-version-377321
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_22T00_30_38_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 22 Nov 2025 00:30:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-377321
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 22 Nov 2025 00:32:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 22 Nov 2025 00:32:12 +0000   Sat, 22 Nov 2025 00:30:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 22 Nov 2025 00:32:12 +0000   Sat, 22 Nov 2025 00:30:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 22 Nov 2025 00:32:12 +0000   Sat, 22 Nov 2025 00:30:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 22 Nov 2025 00:32:12 +0000   Sat, 22 Nov 2025 00:31:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-377321
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 5665009e93b91d39dc05718b691e3875
	  System UUID:                6461bf81-9141-4b24-bd64-39ea1ba5c316
	  Boot ID:                    8e6acb0e-95c3-406d-a4d5-d86e16610b0f
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 coredns-5dd5756b68-lwzsc                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     100s
	  kube-system                 etcd-old-k8s-version-377321                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         112s
	  kube-system                 kindnet-f996p                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      100s
	  kube-system                 kube-apiserver-old-k8s-version-377321             250m (3%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-controller-manager-old-k8s-version-377321    200m (2%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-proxy-pz8cc                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 kube-scheduler-old-k8s-version-377321             100m (1%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-mj7xq        0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-8fvls             0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 99s                kube-proxy       
	  Normal  Starting                 47s                kube-proxy       
	  Normal  Starting                 113s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  112s               kubelet          Node old-k8s-version-377321 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    112s               kubelet          Node old-k8s-version-377321 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     112s               kubelet          Node old-k8s-version-377321 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           101s               node-controller  Node old-k8s-version-377321 event: Registered Node old-k8s-version-377321 in Controller
	  Normal  NodeReady                86s                kubelet          Node old-k8s-version-377321 status is now: NodeReady
	  Normal  Starting                 52s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  52s (x8 over 52s)  kubelet          Node old-k8s-version-377321 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    52s (x8 over 52s)  kubelet          Node old-k8s-version-377321 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     52s (x8 over 52s)  kubelet          Node old-k8s-version-377321 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           36s                node-controller  Node old-k8s-version-377321 event: Registered Node old-k8s-version-377321 in Controller
	
	
	==> dmesg <==
	[  +0.082960] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023673] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.276450] kauditd_printk_skb: 47 callbacks suppressed
	[Nov21 23:48] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.062274] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.023896] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.023887] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.023879] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.023873] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +2.047773] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +4.031577] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[Nov21 23:49] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[ +16.383304] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[ +32.252695] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	
	
	==> etcd [0c7b31cf741c7a5491efff25f26daaf7e50f1b38c7b0275cb2a437a4babfc650] <==
	{"level":"info","ts":"2025-11-22T00:31:39.369441Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-22T00:31:39.369471Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-22T00:31:39.369889Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-22T00:31:39.369505Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-22T00:31:40.856898Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-22T00:31:40.856941Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-22T00:31:40.856969Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-22T00:31:40.856986Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-11-22T00:31:40.856994Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-11-22T00:31:40.857003Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-11-22T00:31:40.857009Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-11-22T00:31:40.858077Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-377321 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-22T00:31:40.858105Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-22T00:31:40.85809Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-22T00:31:40.858271Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-22T00:31:40.858321Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-22T00:31:40.859461Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-22T00:31:40.859463Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-11-22T00:31:47.090279Z","caller":"traceutil/trace.go:171","msg":"trace[1422261589] transaction","detail":"{read_only:false; response_revision:458; number_of_response:1; }","duration":"130.380418ms","start":"2025-11-22T00:31:46.95988Z","end":"2025-11-22T00:31:47.090261Z","steps":["trace[1422261589] 'process raft request'  (duration: 124.483475ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-22T00:31:47.345805Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.364151ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597221089893766 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/old-k8s-version-377321.187a2cd5528ccda7\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/old-k8s-version-377321.187a2cd5528ccda7\" value_size:658 lease:499225184235117955 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-11-22T00:31:47.345921Z","caller":"traceutil/trace.go:171","msg":"trace[1590494862] transaction","detail":"{read_only:false; response_revision:459; number_of_response:1; }","duration":"252.994334ms","start":"2025-11-22T00:31:47.092905Z","end":"2025-11-22T00:31:47.345899Z","steps":["trace[1590494862] 'process raft request'  (duration: 122.270217ms)","trace[1590494862] 'compare'  (duration: 130.274261ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-22T00:31:47.614141Z","caller":"traceutil/trace.go:171","msg":"trace[1548252938] transaction","detail":"{read_only:false; response_revision:462; number_of_response:1; }","duration":"176.753813ms","start":"2025-11-22T00:31:47.437362Z","end":"2025-11-22T00:31:47.614116Z","steps":["trace[1548252938] 'process raft request'  (duration: 130.305911ms)","trace[1548252938] 'compare'  (duration: 46.356016ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-22T00:31:47.826767Z","caller":"traceutil/trace.go:171","msg":"trace[55112707] transaction","detail":"{read_only:false; response_revision:463; number_of_response:1; }","duration":"208.948211ms","start":"2025-11-22T00:31:47.617796Z","end":"2025-11-22T00:31:47.826744Z","steps":["trace[55112707] 'process raft request'  (duration: 129.022116ms)","trace[55112707] 'compare'  (duration: 79.82714ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-22T00:31:48.097469Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"116.49479ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597221089893780 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/old-k8s-version-377321.187a2cd5528ccda7\" mod_revision:462 > success:<request_put:<key:\"/registry/events/default/old-k8s-version-377321.187a2cd5528ccda7\" value_size:658 lease:499225184235117955 >> failure:<request_range:<key:\"/registry/events/default/old-k8s-version-377321.187a2cd5528ccda7\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-22T00:31:48.097534Z","caller":"traceutil/trace.go:171","msg":"trace[461512084] transaction","detail":"{read_only:false; response_revision:465; number_of_response:1; }","duration":"261.090269ms","start":"2025-11-22T00:31:47.836434Z","end":"2025-11-22T00:31:48.097524Z","steps":["trace[461512084] 'process raft request'  (duration: 144.383619ms)","trace[461512084] 'compare'  (duration: 116.380233ms)"],"step_count":2}
	
	
	==> kernel <==
	 00:32:30 up  1:14,  0 user,  load average: 3.32, 3.04, 1.86
	Linux old-k8s-version-377321 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [570f113a27a5135a9bb473c8bdf01eb25f09ab8108a4e98dd642e15f17472989] <==
	I1122 00:31:43.439418       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1122 00:31:43.439685       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1122 00:31:43.439865       1 main.go:148] setting mtu 1500 for CNI 
	I1122 00:31:43.439883       1 main.go:178] kindnetd IP family: "ipv4"
	I1122 00:31:43.439911       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-22T00:31:43Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1122 00:31:43.642659       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1122 00:31:43.642729       1 controller.go:381] "Waiting for informer caches to sync"
	I1122 00:31:43.642746       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1122 00:31:43.735352       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1122 00:31:44.171898       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1122 00:31:44.171936       1 metrics.go:72] Registering metrics
	I1122 00:31:44.172006       1 controller.go:711] "Syncing nftables rules"
	I1122 00:31:53.643176       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1122 00:31:53.643239       1 main.go:301] handling current node
	I1122 00:32:03.643166       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1122 00:32:03.643201       1 main.go:301] handling current node
	I1122 00:32:13.643144       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1122 00:32:13.643175       1 main.go:301] handling current node
	I1122 00:32:23.643003       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1122 00:32:23.643035       1 main.go:301] handling current node
	
	
	==> kube-apiserver [5819251d36741016f113d53581c7c528ace5865eeb58ffe60e69f44d077e7cd2] <==
	I1122 00:31:41.756588       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1122 00:31:41.851128       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1122 00:31:41.851144       1 shared_informer.go:318] Caches are synced for configmaps
	I1122 00:31:41.851673       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1122 00:31:41.852648       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1122 00:31:41.852870       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1122 00:31:41.852897       1 aggregator.go:166] initial CRD sync complete...
	I1122 00:31:41.852908       1 autoregister_controller.go:141] Starting autoregister controller
	I1122 00:31:41.852919       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1122 00:31:41.852927       1 cache.go:39] Caches are synced for autoregister controller
	I1122 00:31:41.854174       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1122 00:31:41.854190       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1122 00:31:41.865398       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1122 00:31:41.881510       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1122 00:31:42.641992       1 controller.go:624] quota admission added evaluator for: namespaces
	I1122 00:31:42.676434       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1122 00:31:42.695540       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1122 00:31:42.702730       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1122 00:31:42.712532       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1122 00:31:42.751563       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.229.37"}
	I1122 00:31:42.754044       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1122 00:31:42.767562       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.161.21"}
	I1122 00:31:54.771674       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1122 00:31:54.923253       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1122 00:31:55.024393       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [ed98561b5f5aba5d27a95290d74bdb9ae0ac348ec62233efd0e83b347c5ad42b] <==
	I1122 00:31:54.775743       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1122 00:31:54.776778       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I1122 00:31:54.985411       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-8fvls"
	I1122 00:31:54.987756       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="469.375773ms"
	I1122 00:31:54.988499       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="96.345µs"
	I1122 00:31:54.989242       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-mj7xq"
	I1122 00:31:54.991419       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="216.038779ms"
	I1122 00:31:54.998491       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="221.317474ms"
	I1122 00:31:55.003758       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="5.223447ms"
	I1122 00:31:55.003840       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="43.82µs"
	I1122 00:31:55.008404       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="16.754198ms"
	I1122 00:31:55.008494       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="57.187µs"
	I1122 00:31:55.020681       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="70.805µs"
	I1122 00:31:55.039678       1 shared_informer.go:318] Caches are synced for garbage collector
	I1122 00:31:55.107476       1 shared_informer.go:318] Caches are synced for garbage collector
	I1122 00:31:55.107512       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1122 00:32:01.915344       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="11.099813ms"
	I1122 00:32:01.915974       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="562.153µs"
	I1122 00:32:03.904658       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="67.662µs"
	I1122 00:32:04.917677       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="600.842µs"
	I1122 00:32:05.914209       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="107.516µs"
	I1122 00:32:14.544005       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.429727ms"
	I1122 00:32:14.544140       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="98.39µs"
	I1122 00:32:21.950834       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="107.581µs"
	I1122 00:32:25.907899       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="105.132µs"
	
	
	==> kube-proxy [6a1f00984a7dff4ce68585b4b0994ccd7b263abf46aef826150cbb2693c2b895] <==
	I1122 00:31:43.233815       1 server_others.go:69] "Using iptables proxy"
	I1122 00:31:43.247029       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1122 00:31:43.266393       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1122 00:31:43.268780       1 server_others.go:152] "Using iptables Proxier"
	I1122 00:31:43.268806       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1122 00:31:43.268812       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1122 00:31:43.268839       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1122 00:31:43.269128       1 server.go:846] "Version info" version="v1.28.0"
	I1122 00:31:43.269195       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:31:43.269826       1 config.go:97] "Starting endpoint slice config controller"
	I1122 00:31:43.269832       1 config.go:188] "Starting service config controller"
	I1122 00:31:43.269874       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1122 00:31:43.269876       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1122 00:31:43.269960       1 config.go:315] "Starting node config controller"
	I1122 00:31:43.269969       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1122 00:31:43.370165       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1122 00:31:43.370196       1 shared_informer.go:318] Caches are synced for service config
	I1122 00:31:43.370165       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [ab6a019fd3f49e6fab48be38ce5872af37de1804d6bf8f07d05a6d98aaedd575] <==
	I1122 00:31:39.948334       1 serving.go:348] Generated self-signed cert in-memory
	W1122 00:31:41.765635       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1122 00:31:41.765669       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1122 00:31:41.765682       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1122 00:31:41.765690       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1122 00:31:41.789402       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1122 00:31:41.789486       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:31:41.791902       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:31:41.791954       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1122 00:31:41.794213       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1122 00:31:41.794277       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1122 00:31:41.892544       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 22 00:31:54 old-k8s-version-377321 kubelet[727]: I1122 00:31:54.996461     727 topology_manager.go:215] "Topology Admit Handler" podUID="7455bdaf-04f4-4187-a0e5-e2633acf1e1e" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-mj7xq"
	Nov 22 00:31:55 old-k8s-version-377321 kubelet[727]: I1122 00:31:55.059588     727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/80fdd4a9-2931-48e7-8084-644a5da2b47b-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-8fvls\" (UID: \"80fdd4a9-2931-48e7-8084-644a5da2b47b\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-8fvls"
	Nov 22 00:31:55 old-k8s-version-377321 kubelet[727]: I1122 00:31:55.059649     727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/7455bdaf-04f4-4187-a0e5-e2633acf1e1e-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-mj7xq\" (UID: \"7455bdaf-04f4-4187-a0e5-e2633acf1e1e\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-mj7xq"
	Nov 22 00:31:55 old-k8s-version-377321 kubelet[727]: I1122 00:31:55.059693     727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v94nk\" (UniqueName: \"kubernetes.io/projected/7455bdaf-04f4-4187-a0e5-e2633acf1e1e-kube-api-access-v94nk\") pod \"dashboard-metrics-scraper-5f989dc9cf-mj7xq\" (UID: \"7455bdaf-04f4-4187-a0e5-e2633acf1e1e\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-mj7xq"
	Nov 22 00:31:55 old-k8s-version-377321 kubelet[727]: I1122 00:31:55.059839     727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g75nt\" (UniqueName: \"kubernetes.io/projected/80fdd4a9-2931-48e7-8084-644a5da2b47b-kube-api-access-g75nt\") pod \"kubernetes-dashboard-8694d4445c-8fvls\" (UID: \"80fdd4a9-2931-48e7-8084-644a5da2b47b\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-8fvls"
	Nov 22 00:32:03 old-k8s-version-377321 kubelet[727]: I1122 00:32:03.893380     727 scope.go:117] "RemoveContainer" containerID="cbb7ae0c3ce0a915c8fce0496a4994600161a185f84d868554dcf1510250a817"
	Nov 22 00:32:03 old-k8s-version-377321 kubelet[727]: I1122 00:32:03.904586     727 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-8fvls" podStartSLOduration=4.669567495 podCreationTimestamp="2025-11-22 00:31:54 +0000 UTC" firstStartedPulling="2025-11-22 00:31:55.921048277 +0000 UTC m=+17.215118139" lastFinishedPulling="2025-11-22 00:32:01.156015745 +0000 UTC m=+22.450085596" observedRunningTime="2025-11-22 00:32:01.905762545 +0000 UTC m=+23.199832413" watchObservedRunningTime="2025-11-22 00:32:03.904534952 +0000 UTC m=+25.198604824"
	Nov 22 00:32:04 old-k8s-version-377321 kubelet[727]: I1122 00:32:04.898567     727 scope.go:117] "RemoveContainer" containerID="cbb7ae0c3ce0a915c8fce0496a4994600161a185f84d868554dcf1510250a817"
	Nov 22 00:32:04 old-k8s-version-377321 kubelet[727]: I1122 00:32:04.898900     727 scope.go:117] "RemoveContainer" containerID="2bc31ae3c5f85f8677f2bd3963dacd958e4d2494b6b49f67fdfb9a0c31c680bb"
	Nov 22 00:32:04 old-k8s-version-377321 kubelet[727]: E1122 00:32:04.899526     727 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-mj7xq_kubernetes-dashboard(7455bdaf-04f4-4187-a0e5-e2633acf1e1e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-mj7xq" podUID="7455bdaf-04f4-4187-a0e5-e2633acf1e1e"
	Nov 22 00:32:05 old-k8s-version-377321 kubelet[727]: I1122 00:32:05.902598     727 scope.go:117] "RemoveContainer" containerID="2bc31ae3c5f85f8677f2bd3963dacd958e4d2494b6b49f67fdfb9a0c31c680bb"
	Nov 22 00:32:05 old-k8s-version-377321 kubelet[727]: E1122 00:32:05.902962     727 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-mj7xq_kubernetes-dashboard(7455bdaf-04f4-4187-a0e5-e2633acf1e1e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-mj7xq" podUID="7455bdaf-04f4-4187-a0e5-e2633acf1e1e"
	Nov 22 00:32:06 old-k8s-version-377321 kubelet[727]: I1122 00:32:06.905338     727 scope.go:117] "RemoveContainer" containerID="2bc31ae3c5f85f8677f2bd3963dacd958e4d2494b6b49f67fdfb9a0c31c680bb"
	Nov 22 00:32:06 old-k8s-version-377321 kubelet[727]: E1122 00:32:06.905625     727 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-mj7xq_kubernetes-dashboard(7455bdaf-04f4-4187-a0e5-e2633acf1e1e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-mj7xq" podUID="7455bdaf-04f4-4187-a0e5-e2633acf1e1e"
	Nov 22 00:32:13 old-k8s-version-377321 kubelet[727]: I1122 00:32:13.920534     727 scope.go:117] "RemoveContainer" containerID="6fd900059ec31ad554d574671f6b2f24e47fc4c2cfa17b61d25d410687f7c02f"
	Nov 22 00:32:21 old-k8s-version-377321 kubelet[727]: I1122 00:32:21.796955     727 scope.go:117] "RemoveContainer" containerID="2bc31ae3c5f85f8677f2bd3963dacd958e4d2494b6b49f67fdfb9a0c31c680bb"
	Nov 22 00:32:21 old-k8s-version-377321 kubelet[727]: I1122 00:32:21.940627     727 scope.go:117] "RemoveContainer" containerID="2bc31ae3c5f85f8677f2bd3963dacd958e4d2494b6b49f67fdfb9a0c31c680bb"
	Nov 22 00:32:21 old-k8s-version-377321 kubelet[727]: I1122 00:32:21.940829     727 scope.go:117] "RemoveContainer" containerID="1159f8806d56e7efa54dd600adca00cd67da4d0d67ec86f0feef41418ddc6a5d"
	Nov 22 00:32:21 old-k8s-version-377321 kubelet[727]: E1122 00:32:21.941221     727 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-mj7xq_kubernetes-dashboard(7455bdaf-04f4-4187-a0e5-e2633acf1e1e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-mj7xq" podUID="7455bdaf-04f4-4187-a0e5-e2633acf1e1e"
	Nov 22 00:32:25 old-k8s-version-377321 kubelet[727]: I1122 00:32:25.898833     727 scope.go:117] "RemoveContainer" containerID="1159f8806d56e7efa54dd600adca00cd67da4d0d67ec86f0feef41418ddc6a5d"
	Nov 22 00:32:25 old-k8s-version-377321 kubelet[727]: E1122 00:32:25.899229     727 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-mj7xq_kubernetes-dashboard(7455bdaf-04f4-4187-a0e5-e2633acf1e1e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-mj7xq" podUID="7455bdaf-04f4-4187-a0e5-e2633acf1e1e"
	Nov 22 00:32:28 old-k8s-version-377321 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 22 00:32:28 old-k8s-version-377321 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 22 00:32:28 old-k8s-version-377321 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 22 00:32:28 old-k8s-version-377321 systemd[1]: kubelet.service: Consumed 1.369s CPU time.
	
	
	==> kubernetes-dashboard [d3398b58126a8fcaaa90af41bb9b636f054fe29a545311e069c0bf53e69969c0] <==
	2025/11/22 00:32:01 Starting overwatch
	2025/11/22 00:32:01 Using namespace: kubernetes-dashboard
	2025/11/22 00:32:01 Using in-cluster config to connect to apiserver
	2025/11/22 00:32:01 Using secret token for csrf signing
	2025/11/22 00:32:01 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/22 00:32:01 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/22 00:32:01 Successful initial request to the apiserver, version: v1.28.0
	2025/11/22 00:32:01 Generating JWE encryption key
	2025/11/22 00:32:01 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/22 00:32:01 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/22 00:32:01 Initializing JWE encryption key from synchronized object
	2025/11/22 00:32:01 Creating in-cluster Sidecar client
	2025/11/22 00:32:01 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/22 00:32:01 Serving insecurely on HTTP port: 9090
	
	
	==> storage-provisioner [6fd900059ec31ad554d574671f6b2f24e47fc4c2cfa17b61d25d410687f7c02f] <==
	I1122 00:31:43.173344       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1122 00:32:13.176295       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [93bfe3b02b3028d26df5463ec28e27751aec1aaa96f6b52964a50d6b63a63975] <==
	I1122 00:32:13.969873       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1122 00:32:13.977928       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1122 00:32:13.977969       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-377321 -n old-k8s-version-377321
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-377321 -n old-k8s-version-377321: exit status 2 (322.326912ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-377321 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-377321
helpers_test.go:243: (dbg) docker inspect old-k8s-version-377321:

-- stdout --
	[
	    {
	        "Id": "dffbefc5635f5e12082532a9ddb8dae95b35a43f9ae00cf681a38814017e28e1",
	        "Created": "2025-11-22T00:30:25.888209771Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 246332,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-22T00:31:32.723349671Z",
	            "FinishedAt": "2025-11-22T00:31:31.817793072Z"
	        },
	        "Image": "sha256:e5906b22e872a17998ae88aee6d850484e7a99144e0db6afcf2c44a53e6042d4",
	        "ResolvConfPath": "/var/lib/docker/containers/dffbefc5635f5e12082532a9ddb8dae95b35a43f9ae00cf681a38814017e28e1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dffbefc5635f5e12082532a9ddb8dae95b35a43f9ae00cf681a38814017e28e1/hostname",
	        "HostsPath": "/var/lib/docker/containers/dffbefc5635f5e12082532a9ddb8dae95b35a43f9ae00cf681a38814017e28e1/hosts",
	        "LogPath": "/var/lib/docker/containers/dffbefc5635f5e12082532a9ddb8dae95b35a43f9ae00cf681a38814017e28e1/dffbefc5635f5e12082532a9ddb8dae95b35a43f9ae00cf681a38814017e28e1-json.log",
	        "Name": "/old-k8s-version-377321",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-377321:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-377321",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dffbefc5635f5e12082532a9ddb8dae95b35a43f9ae00cf681a38814017e28e1",
	                "LowerDir": "/var/lib/docker/overlay2/6ac3be49161fcc219b6798f3ada9e8452efb3f889f36e7992c217a75468d65a5-init/diff:/var/lib/docker/overlay2/ba9391341a6f38c98c330a240b44a37e901d779a7c95e15c141c59c46e09c348/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6ac3be49161fcc219b6798f3ada9e8452efb3f889f36e7992c217a75468d65a5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6ac3be49161fcc219b6798f3ada9e8452efb3f889f36e7992c217a75468d65a5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6ac3be49161fcc219b6798f3ada9e8452efb3f889f36e7992c217a75468d65a5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-377321",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-377321/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-377321",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-377321",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-377321",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "a48eae4638b94dfb5d133b2065952c7c968095d0c86ef9b9429c6276dbb06902",
	            "SandboxKey": "/var/run/docker/netns/a48eae4638b9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-377321": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "476dc93872199ad7652e7290a0113d19cf28252d1edac64765d412bab275e357",
	                    "EndpointID": "a7a16947fb8e2b17fe29195e5b2420526809458790ab58dd6c8eb2c8b97d99de",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "c6:00:ec:5c:a2:af",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-377321",
	                        "dffbefc5635f"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-377321 -n old-k8s-version-377321
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-377321 -n old-k8s-version-377321: exit status 2 (310.75331ms)

-- stdout --
	Running

                                                
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-377321 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-377321 logs -n 25: (1.04357049s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p NoKubernetes-953061                                                                                                                                                                                                                        │ NoKubernetes-953061    │ jenkins │ v1.37.0 │ 22 Nov 25 00:30 UTC │ 22 Nov 25 00:30 UTC │
	│ start   │ -p NoKubernetes-953061 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                         │ NoKubernetes-953061    │ jenkins │ v1.37.0 │ 22 Nov 25 00:30 UTC │ 22 Nov 25 00:30 UTC │
	│ ssh     │ -p NoKubernetes-953061 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-953061    │ jenkins │ v1.37.0 │ 22 Nov 25 00:30 UTC │                     │
	│ ssh     │ cert-options-524062 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-524062    │ jenkins │ v1.37.0 │ 22 Nov 25 00:30 UTC │ 22 Nov 25 00:30 UTC │
	│ ssh     │ -p cert-options-524062 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-524062    │ jenkins │ v1.37.0 │ 22 Nov 25 00:30 UTC │ 22 Nov 25 00:30 UTC │
	│ delete  │ -p cert-options-524062                                                                                                                                                                                                                        │ cert-options-524062    │ jenkins │ v1.37.0 │ 22 Nov 25 00:30 UTC │ 22 Nov 25 00:30 UTC │
	│ start   │ -p old-k8s-version-377321 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-377321 │ jenkins │ v1.37.0 │ 22 Nov 25 00:30 UTC │ 22 Nov 25 00:31 UTC │
	│ stop    │ -p NoKubernetes-953061                                                                                                                                                                                                                        │ NoKubernetes-953061    │ jenkins │ v1.37.0 │ 22 Nov 25 00:30 UTC │ 22 Nov 25 00:30 UTC │
	│ start   │ -p NoKubernetes-953061 --driver=docker  --container-runtime=crio                                                                                                                                                                              │ NoKubernetes-953061    │ jenkins │ v1.37.0 │ 22 Nov 25 00:30 UTC │ 22 Nov 25 00:30 UTC │
	│ ssh     │ -p NoKubernetes-953061 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-953061    │ jenkins │ v1.37.0 │ 22 Nov 25 00:30 UTC │                     │
	│ delete  │ -p NoKubernetes-953061                                                                                                                                                                                                                        │ NoKubernetes-953061    │ jenkins │ v1.37.0 │ 22 Nov 25 00:30 UTC │ 22 Nov 25 00:30 UTC │
	│ start   │ -p no-preload-983546 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-983546      │ jenkins │ v1.37.0 │ 22 Nov 25 00:30 UTC │ 22 Nov 25 00:31 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-377321 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-377321 │ jenkins │ v1.37.0 │ 22 Nov 25 00:31 UTC │                     │
	│ stop    │ -p old-k8s-version-377321 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-377321 │ jenkins │ v1.37.0 │ 22 Nov 25 00:31 UTC │ 22 Nov 25 00:31 UTC │
	│ addons  │ enable metrics-server -p no-preload-983546 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-983546      │ jenkins │ v1.37.0 │ 22 Nov 25 00:31 UTC │                     │
	│ addons  │ enable dashboard -p old-k8s-version-377321 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-377321 │ jenkins │ v1.37.0 │ 22 Nov 25 00:31 UTC │ 22 Nov 25 00:31 UTC │
	│ start   │ -p old-k8s-version-377321 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-377321 │ jenkins │ v1.37.0 │ 22 Nov 25 00:31 UTC │ 22 Nov 25 00:32 UTC │
	│ stop    │ -p no-preload-983546 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-983546      │ jenkins │ v1.37.0 │ 22 Nov 25 00:31 UTC │ 22 Nov 25 00:31 UTC │
	│ start   │ -p cert-expiration-624739 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-624739 │ jenkins │ v1.37.0 │ 22 Nov 25 00:31 UTC │ 22 Nov 25 00:31 UTC │
	│ delete  │ -p cert-expiration-624739                                                                                                                                                                                                                     │ cert-expiration-624739 │ jenkins │ v1.37.0 │ 22 Nov 25 00:31 UTC │ 22 Nov 25 00:31 UTC │
	│ start   │ -p embed-certs-084979 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-084979     │ jenkins │ v1.37.0 │ 22 Nov 25 00:31 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-983546 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-983546      │ jenkins │ v1.37.0 │ 22 Nov 25 00:31 UTC │ 22 Nov 25 00:31 UTC │
	│ start   │ -p no-preload-983546 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-983546      │ jenkins │ v1.37.0 │ 22 Nov 25 00:31 UTC │                     │
	│ image   │ old-k8s-version-377321 image list --format=json                                                                                                                                                                                               │ old-k8s-version-377321 │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │ 22 Nov 25 00:32 UTC │
	│ pause   │ -p old-k8s-version-377321 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-377321 │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
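Rows in the table whose final (end-time) column is empty, such as the "ssh ... systemctl is-active --quiet service kubelet" checks, typically indicate the command exited non-zero; for a --no-kubernetes profile that is expected, since no kubelet is running. A minimal sketch of reproducing one such check by hand, assuming the profile had not yet been deleted (--quiet suppresses output, so the exit code carries the result):

	# hypothetical re-run of the audited check; arguments copied from the table above
	out/minikube-linux-amd64 -p NoKubernetes-953061 ssh -- sudo systemctl is-active --quiet service kubelet
	echo "kubelet active? exit=$?"   # non-zero (3) when the unit is inactive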
	
	
	==> Last Start <==
	Log file created at: 2025/11/22 00:31:50
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
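Every entry below follows that klog-style format: a severity letter (I/W/E/F), month and day, wall-clock time with microseconds, the thread id, then the source file and line. A minimal sketch of pulling the severity and source location out of one of these lines, assuming GNU sed is available:

	# hypothetical one-liner: extract severity and file:line from a klog entry
	echo 'I1122 00:31:50.613786  252747 out.go:360] Setting OutFile to fd 1 ...' \
	  | sed -E 's/^([IWEF])[0-9]{4} [0-9:.]+ +[0-9]+ ([^]]+)\].*/severity=\1 source=\2/'
	# -> severity=I source=out.go:360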
	I1122 00:31:50.613786  252747 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:31:50.613899  252747 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:31:50.613911  252747 out.go:374] Setting ErrFile to fd 2...
	I1122 00:31:50.613916  252747 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:31:50.614172  252747 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9122/.minikube/bin
	I1122 00:31:50.614647  252747 out.go:368] Setting JSON to false
	I1122 00:31:50.615814  252747 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4460,"bootTime":1763767051,"procs":337,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1122 00:31:50.615873  252747 start.go:143] virtualization: kvm guest
	I1122 00:31:50.617870  252747 out.go:179] * [no-preload-983546] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1122 00:31:50.619124  252747 notify.go:221] Checking for updates...
	I1122 00:31:50.619164  252747 out.go:179]   - MINIKUBE_LOCATION=21934
	I1122 00:31:50.620473  252747 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 00:31:50.621715  252747 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-9122/kubeconfig
	I1122 00:31:50.622926  252747 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-9122/.minikube
	I1122 00:31:50.623998  252747 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1122 00:31:50.625079  252747 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 00:31:50.626775  252747 config.go:182] Loaded profile config "no-preload-983546": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:31:50.627519  252747 driver.go:422] Setting default libvirt URI to qemu:///system
	I1122 00:31:50.653690  252747 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1122 00:31:50.653793  252747 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:31:50.720537  252747 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-22 00:31:50.710138927 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1122 00:31:50.720702  252747 docker.go:319] overlay module found
	I1122 00:31:50.722520  252747 out.go:179] * Using the docker driver based on existing profile
	I1122 00:31:50.723640  252747 start.go:309] selected driver: docker
	I1122 00:31:50.723664  252747 start.go:930] validating driver "docker" against &{Name:no-preload-983546 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-983546 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:31:50.723763  252747 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 00:31:50.724302  252747 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:31:50.785041  252747 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-22 00:31:50.775165835 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1122 00:31:50.785404  252747 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:31:50.785436  252747 cni.go:84] Creating CNI manager for ""
	I1122 00:31:50.785505  252747 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1122 00:31:50.785549  252747 start.go:353] cluster config:
	{Name:no-preload-983546 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-983546 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:31:50.787349  252747 out.go:179] * Starting "no-preload-983546" primary control-plane node in "no-preload-983546" cluster
	I1122 00:31:50.792004  252747 cache.go:134] Beginning downloading kic base image for docker with crio
	I1122 00:31:50.793295  252747 out.go:179] * Pulling base image v0.0.48-1763588073-21934 ...
	I1122 00:31:50.794564  252747 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:31:50.794665  252747 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon
	I1122 00:31:50.794683  252747 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/no-preload-983546/config.json ...
	I1122 00:31:50.794898  252747 cache.go:107] acquiring lock: {Name:mk4b1b351b6e05df924b1dea34823a5bae874e1f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:31:50.794939  252747 cache.go:107] acquiring lock: {Name:mk2e1ee991a04da9a748a7199e1558e3e5412fee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:31:50.794973  252747 cache.go:107] acquiring lock: {Name:mk6d624ce3b8b502967383fd9c495ee3efa5f0d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:31:50.794927  252747 cache.go:107] acquiring lock: {Name:mkcfead1c087753e04498b19f3a6339bfee4e556 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:31:50.794974  252747 cache.go:107] acquiring lock: {Name:mkeb32bd396caf88f92b976cb818c75db7b8b2e0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:31:50.795024  252747 cache.go:115] /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1122 00:31:50.795027  252747 cache.go:115] /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1122 00:31:50.795034  252747 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 133.763µs
	I1122 00:31:50.795035  252747 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 64.826µs
	I1122 00:31:50.795049  252747 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1122 00:31:50.795062  252747 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1122 00:31:50.795015  252747 cache.go:115] /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1122 00:31:50.795078  252747 cache.go:115] /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1122 00:31:50.795085  252747 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 150.824µs
	I1122 00:31:50.795087  252747 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 201.15µs
	I1122 00:31:50.795093  252747 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1122 00:31:50.795115  252747 cache.go:115] /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1122 00:31:50.795040  252747 cache.go:107] acquiring lock: {Name:mk12d63b3212c690b6dceb2e93efe384169c5870 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:31:50.795133  252747 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 204.734µs
	I1122 00:31:50.795125  252747 cache.go:107] acquiring lock: {Name:mk0912b033af5e0dc6737ad3b2b166867675943b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:31:50.795152  252747 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1122 00:31:50.795095  252747 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1122 00:31:50.795156  252747 cache.go:115] /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1122 00:31:50.795184  252747 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 193.649µs
	I1122 00:31:50.795193  252747 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1122 00:31:50.795030  252747 cache.go:107] acquiring lock: {Name:mk96320d9e02559e4fb5bcee79e63af23abf6b0e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:31:50.795245  252747 cache.go:115] /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1122 00:31:50.795257  252747 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 230.645µs
	I1122 00:31:50.795270  252747 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1122 00:31:50.795319  252747 cache.go:115] /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1122 00:31:50.795342  252747 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 259.065µs
	I1122 00:31:50.795357  252747 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1122 00:31:50.795371  252747 cache.go:87] Successfully saved all images to host disk.
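All eight "save to tar file ... succeeded" steps above are cache hits, which is why each one completes in microseconds: every image tarball already exists under the profile's cache directory. A minimal sketch of spot-checking that cache on the build host (the path is taken from the log; the listing command itself is an assumption about the layout):

	# hypothetical spot-check of the preloaded image cache referenced above
	ls /home/jenkins/minikube-integration/21934-9122/.minikube/cache/images/amd64/registry.k8s.io/
	# expect entries such as kube-apiserver_v1.34.1, kube-scheduler_v1.34.1, etcd_3.6.4-0, pause_3.10.1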
	I1122 00:31:50.825409  252747 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon, skipping pull
	I1122 00:31:50.825435  252747 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e exists in daemon, skipping load
	I1122 00:31:50.825462  252747 cache.go:243] Successfully downloaded all kic artifacts
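The kicbase image lookup succeeds against the local Docker daemon, so both the pull and the load are skipped. A minimal sketch of the same presence check done by hand, assuming the image is still tagged in the daemon (inspecting by tag rather than the digest-pinned reference for simplicity):

	# hypothetical: verify the base image exists locally without pulling it
	docker image inspect gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934 \
	  >/dev/null 2>&1 && echo "in daemon, skipping pull"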
	I1122 00:31:50.825496  252747 start.go:360] acquireMachinesLock for no-preload-983546: {Name:mk180ef84c85822552d32d9baa5d4747338a2875 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:31:50.825576  252747 start.go:364] duration metric: took 56.588µs to acquireMachinesLock for "no-preload-983546"
	I1122 00:31:50.825605  252747 start.go:96] Skipping create...Using existing machine configuration
	I1122 00:31:50.825616  252747 fix.go:54] fixHost starting: 
	I1122 00:31:50.825975  252747 cli_runner.go:164] Run: docker container inspect no-preload-983546 --format={{.State.Status}}
	I1122 00:31:50.846644  252747 fix.go:112] recreateIfNeeded on no-preload-983546: state=Stopped err=<nil>
	W1122 00:31:50.846687  252747 fix.go:138] unexpected machine state, will restart: <nil>
	I1122 00:31:48.142519  250396 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21934-9122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-084979:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -I lz4 -xf /preloaded.tar -C /extractDir: (4.290820647s)
	I1122 00:31:48.142557  250396 kic.go:203] duration metric: took 4.290978466s to extract preloaded images to volume ...
	W1122 00:31:48.142663  250396 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1122 00:31:48.142708  250396 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1122 00:31:48.142755  250396 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1122 00:31:48.205487  250396 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-084979 --name embed-certs-084979 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-084979 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-084979 --network embed-certs-084979 --ip 192.168.94.2 --volume embed-certs-084979:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e
	I1122 00:31:48.500341  250396 cli_runner.go:164] Run: docker container inspect embed-certs-084979 --format={{.State.Running}}
	I1122 00:31:48.518709  250396 cli_runner.go:164] Run: docker container inspect embed-certs-084979 --format={{.State.Status}}
	I1122 00:31:48.536654  250396 cli_runner.go:164] Run: docker exec embed-certs-084979 stat /var/lib/dpkg/alternatives/iptables
	I1122 00:31:48.585157  250396 oci.go:144] the created container "embed-certs-084979" has a running status.
	I1122 00:31:48.585190  250396 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21934-9122/.minikube/machines/embed-certs-084979/id_rsa...
	I1122 00:31:48.825142  250396 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21934-9122/.minikube/machines/embed-certs-084979/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1122 00:31:48.854801  250396 cli_runner.go:164] Run: docker container inspect embed-certs-084979 --format={{.State.Status}}
	I1122 00:31:48.875986  250396 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1122 00:31:48.876014  250396 kic_runner.go:114] Args: [docker exec --privileged embed-certs-084979 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1122 00:31:48.926633  250396 cli_runner.go:164] Run: docker container inspect embed-certs-084979 --format={{.State.Status}}
	I1122 00:31:48.944315  250396 machine.go:94] provisionDockerMachine start ...
	I1122 00:31:48.944393  250396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-084979
	I1122 00:31:48.962453  250396 main.go:143] libmachine: Using SSH client type: native
	I1122 00:31:48.962805  250396 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1122 00:31:48.962836  250396 main.go:143] libmachine: About to run SSH command:
	hostname
	I1122 00:31:49.093426  250396 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-084979
	
	I1122 00:31:49.093460  250396 ubuntu.go:182] provisioning hostname "embed-certs-084979"
	I1122 00:31:49.093553  250396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-084979
	I1122 00:31:49.114572  250396 main.go:143] libmachine: Using SSH client type: native
	I1122 00:31:49.114795  250396 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1122 00:31:49.114808  250396 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-084979 && echo "embed-certs-084979" | sudo tee /etc/hostname
	I1122 00:31:49.250649  250396 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-084979
	
	I1122 00:31:49.250730  250396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-084979
	I1122 00:31:49.271274  250396 main.go:143] libmachine: Using SSH client type: native
	I1122 00:31:49.271583  250396 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1122 00:31:49.271610  250396 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-084979' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-084979/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-084979' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1122 00:31:49.391218  250396 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1122 00:31:49.391319  250396 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21934-9122/.minikube CaCertPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21934-9122/.minikube}
	I1122 00:31:49.391369  250396 ubuntu.go:190] setting up certificates
	I1122 00:31:49.391380  250396 provision.go:84] configureAuth start
	I1122 00:31:49.391428  250396 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-084979
	I1122 00:31:49.407849  250396 provision.go:143] copyHostCerts
	I1122 00:31:49.407897  250396 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9122/.minikube/ca.pem, removing ...
	I1122 00:31:49.407905  250396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9122/.minikube/ca.pem
	I1122 00:31:49.407968  250396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21934-9122/.minikube/ca.pem (1078 bytes)
	I1122 00:31:49.408065  250396 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9122/.minikube/cert.pem, removing ...
	I1122 00:31:49.408077  250396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9122/.minikube/cert.pem
	I1122 00:31:49.408115  250396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21934-9122/.minikube/cert.pem (1123 bytes)
	I1122 00:31:49.408181  250396 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9122/.minikube/key.pem, removing ...
	I1122 00:31:49.408189  250396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9122/.minikube/key.pem
	I1122 00:31:49.408220  250396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21934-9122/.minikube/key.pem (1675 bytes)
	I1122 00:31:49.408277  250396 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21934-9122/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca-key.pem org=jenkins.embed-certs-084979 san=[127.0.0.1 192.168.94.2 embed-certs-084979 localhost minikube]
	I1122 00:31:49.482513  250396 provision.go:177] copyRemoteCerts
	I1122 00:31:49.482567  250396 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1122 00:31:49.482599  250396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-084979
	I1122 00:31:49.499242  250396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/embed-certs-084979/id_rsa Username:docker}
	I1122 00:31:49.589528  250396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1122 00:31:49.607716  250396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1122 00:31:49.624611  250396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1122 00:31:49.641681  250396 provision.go:87] duration metric: took 250.291766ms to configureAuth
	I1122 00:31:49.641704  250396 ubuntu.go:206] setting minikube options for container-runtime
	I1122 00:31:49.641865  250396 config.go:182] Loaded profile config "embed-certs-084979": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:31:49.641969  250396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-084979
	I1122 00:31:49.658924  250396 main.go:143] libmachine: Using SSH client type: native
	I1122 00:31:49.659163  250396 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1122 00:31:49.659186  250396 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1122 00:31:49.909146  250396 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
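The SSH command above writes a one-line environment file consumed by the CRI-O unit, marking the service CIDR 10.96.0.0/12 as an insecure registry, then restarts CRI-O; the echoed output confirms the file's contents. A minimal sketch of reading the drop-in back from the host, assuming the embed-certs-084979 container is still running:

	# hypothetical check from the host: read back the generated options file
	docker exec embed-certs-084979 cat /etc/sysconfig/crio.minikube
	# expected: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '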
	I1122 00:31:49.909183  250396 machine.go:97] duration metric: took 964.846655ms to provisionDockerMachine
	I1122 00:31:49.909196  250396 client.go:176] duration metric: took 6.612185161s to LocalClient.Create
	I1122 00:31:49.909218  250396 start.go:167] duration metric: took 6.612254944s to libmachine.API.Create "embed-certs-084979"
	I1122 00:31:49.909228  250396 start.go:293] postStartSetup for "embed-certs-084979" (driver="docker")
	I1122 00:31:49.909242  250396 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1122 00:31:49.909315  250396 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1122 00:31:49.909391  250396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-084979
	I1122 00:31:49.926710  250396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/embed-certs-084979/id_rsa Username:docker}
	I1122 00:31:50.021185  250396 ssh_runner.go:195] Run: cat /etc/os-release
	I1122 00:31:50.024665  250396 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1122 00:31:50.024700  250396 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1122 00:31:50.024716  250396 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-9122/.minikube/addons for local assets ...
	I1122 00:31:50.024763  250396 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-9122/.minikube/files for local assets ...
	I1122 00:31:50.024833  250396 filesync.go:149] local asset: /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem -> 145852.pem in /etc/ssl/certs
	I1122 00:31:50.024916  250396 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1122 00:31:50.032263  250396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem --> /etc/ssl/certs/145852.pem (1708 bytes)
	I1122 00:31:50.051194  250396 start.go:296] duration metric: took 141.953441ms for postStartSetup
	I1122 00:31:50.051556  250396 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-084979
	I1122 00:31:50.070490  250396 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/embed-certs-084979/config.json ...
	I1122 00:31:50.070736  250396 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:31:50.070774  250396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-084979
	I1122 00:31:50.087432  250396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/embed-certs-084979/id_rsa Username:docker}
	I1122 00:31:50.174700  250396 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 00:31:50.179003  250396 start.go:128] duration metric: took 6.884067221s to createHost
	I1122 00:31:50.179029  250396 start.go:83] releasing machines lock for "embed-certs-084979", held for 6.884211229s
	I1122 00:31:50.179125  250396 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-084979
	I1122 00:31:50.197076  250396 ssh_runner.go:195] Run: cat /version.json
	I1122 00:31:50.197143  250396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-084979
	I1122 00:31:50.197081  250396 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1122 00:31:50.197259  250396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-084979
	I1122 00:31:50.216181  250396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/embed-certs-084979/id_rsa Username:docker}
	I1122 00:31:50.216448  250396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/embed-certs-084979/id_rsa Username:docker}
	I1122 00:31:50.405463  250396 ssh_runner.go:195] Run: systemctl --version
	I1122 00:31:50.412314  250396 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1122 00:31:50.449538  250396 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1122 00:31:50.454257  250396 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1122 00:31:50.454321  250396 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1122 00:31:50.481373  250396 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1122 00:31:50.481395  250396 start.go:496] detecting cgroup driver to use...
	I1122 00:31:50.481423  250396 detect.go:190] detected "systemd" cgroup driver on host os
	I1122 00:31:50.481468  250396 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1122 00:31:50.496946  250396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1122 00:31:50.509639  250396 docker.go:218] disabling cri-docker service (if available) ...
	I1122 00:31:50.509691  250396 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1122 00:31:50.529078  250396 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1122 00:31:50.546653  250396 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1122 00:31:50.641041  250396 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1122 00:31:50.747548  250396 docker.go:234] disabling docker service ...
	I1122 00:31:50.747616  250396 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1122 00:31:50.771023  250396 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1122 00:31:50.785391  250396 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1122 00:31:50.873942  250396 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1122 00:31:50.956488  250396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1122 00:31:50.970225  250396 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1122 00:31:50.988710  250396 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1122 00:31:50.988779  250396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:31:50.999173  250396 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1122 00:31:50.999240  250396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:31:51.009863  250396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:31:51.018586  250396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:31:51.027048  250396 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1122 00:31:51.035385  250396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:31:51.043855  250396 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:31:51.057140  250396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:31:51.066136  250396 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1122 00:31:51.074109  250396 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1122 00:31:51.082237  250396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:31:51.176401  250396 ssh_runner.go:195] Run: sudo systemctl restart crio
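Taken together, the sed edits above pin the pause image, switch the cgroup manager to systemd, move conmon into the pod cgroup, and open unprivileged ports via default_sysctls before CRI-O is restarted. A minimal sketch of confirming the rewritten drop-in from the host, assuming the same file layout inside the node:

	# hypothetical: show the fields the preceding sed commands rewrote
	docker exec embed-certs-084979 grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf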
	I1122 00:31:51.314780  250396 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1122 00:31:51.314840  250396 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1122 00:31:51.318722  250396 start.go:564] Will wait 60s for crictl version
	I1122 00:31:51.318783  250396 ssh_runner.go:195] Run: which crictl
	I1122 00:31:51.322892  250396 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1122 00:31:51.351139  250396 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1122 00:31:51.351239  250396 ssh_runner.go:195] Run: crio --version
	I1122 00:31:51.382701  250396 ssh_runner.go:195] Run: crio --version
	I1122 00:31:51.420185  250396 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	W1122 00:31:49.823420  246023 pod_ready.go:104] pod "coredns-5dd5756b68-lwzsc" is not "Ready", error: <nil>
	W1122 00:31:51.824531  246023 pod_ready.go:104] pod "coredns-5dd5756b68-lwzsc" is not "Ready", error: <nil>
	I1122 00:31:51.421367  250396 cli_runner.go:164] Run: docker network inspect embed-certs-084979 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:31:51.440897  250396 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1122 00:31:51.444989  250396 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:31:51.455982  250396 kubeadm.go:884] updating cluster {Name:embed-certs-084979 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-084979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1122 00:31:51.456177  250396 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:31:51.456229  250396 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:31:51.489550  250396 crio.go:514] all images are preloaded for cri-o runtime.
	I1122 00:31:51.489569  250396 crio.go:433] Images already preloaded, skipping extraction
	I1122 00:31:51.489613  250396 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:31:51.513343  250396 crio.go:514] all images are preloaded for cri-o runtime.
	I1122 00:31:51.513366  250396 cache_images.go:86] Images are preloaded, skipping loading
	I1122 00:31:51.513375  250396 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1122 00:31:51.513477  250396 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-084979 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-084979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
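The rendered unit text above becomes a systemd drop-in so the kubelet starts with the CRI-O endpoint, the node IP, and the overridden hostname. A minimal sketch of checking what systemd actually loads, assuming the 10-kubeadm.conf drop-in has been written (it is scp'd a few lines below):

	# hypothetical: show the effective kubelet unit, drop-ins included
	docker exec embed-certs-084979 systemctl cat kubelet | grep -- --node-ip
	# expected to contain: --node-ip=192.168.94.2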
	I1122 00:31:51.513562  250396 ssh_runner.go:195] Run: crio config
	I1122 00:31:51.555997  250396 cni.go:84] Creating CNI manager for ""
	I1122 00:31:51.556025  250396 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1122 00:31:51.556042  250396 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1122 00:31:51.556092  250396 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-084979 NodeName:embed-certs-084979 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1122 00:31:51.556218  250396 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-084979"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
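The generated manifest above bundles InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration into one file that minikube later hands to kubeadm (written below as /var/tmp/minikube/kubeadm.yaml.new). A minimal sketch of sanity-checking such a file before use, assuming the cached kubeadm binary found below; "kubeadm config validate" has existed since v1.26:

	# hypothetical: let kubeadm validate the combined config once it is on disk
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new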
	I1122 00:31:51.556274  250396 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1122 00:31:51.564046  250396 binaries.go:51] Found k8s binaries, skipping transfer
	I1122 00:31:51.564132  250396 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1122 00:31:51.571550  250396 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1122 00:31:51.583692  250396 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1122 00:31:51.598125  250396 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1122 00:31:51.610121  250396 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1122 00:31:51.613403  250396 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:31:51.622567  250396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:31:51.701252  250396 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:31:51.725101  250396 certs.go:69] Setting up /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/embed-certs-084979 for IP: 192.168.94.2
	I1122 00:31:51.725122  250396 certs.go:195] generating shared ca certs ...
	I1122 00:31:51.725143  250396 certs.go:227] acquiring lock for ca certs: {Name:mk04e97b22fe69e444baefaaf0d53a0afe979470 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:31:51.725324  250396 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21934-9122/.minikube/ca.key
	I1122 00:31:51.725375  250396 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21934-9122/.minikube/proxy-client-ca.key
	I1122 00:31:51.725395  250396 certs.go:257] generating profile certs ...
	I1122 00:31:51.725464  250396 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/embed-certs-084979/client.key
	I1122 00:31:51.725481  250396 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/embed-certs-084979/client.crt with IP's: []
	I1122 00:31:51.785187  250396 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/embed-certs-084979/client.crt ...
	I1122 00:31:51.785211  250396 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/embed-certs-084979/client.crt: {Name:mk830ed4fcb985c65a974ee02d16ac0f9d685d6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:31:51.785367  250396 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/embed-certs-084979/client.key ...
	I1122 00:31:51.785379  250396 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/embed-certs-084979/client.key: {Name:mk653952efc7ac0956717f9b7e36d389ed0e2a03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:31:51.785457  250396 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/embed-certs-084979/apiserver.key.07b0558b
	I1122 00:31:51.785471  250396 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/embed-certs-084979/apiserver.crt.07b0558b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1122 00:31:51.999382  250396 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/embed-certs-084979/apiserver.crt.07b0558b ...
	I1122 00:31:51.999405  250396 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/embed-certs-084979/apiserver.crt.07b0558b: {Name:mkb6532e83c26df6540d503cab858cd41d31a97d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:31:51.999570  250396 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/embed-certs-084979/apiserver.key.07b0558b ...
	I1122 00:31:51.999584  250396 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/embed-certs-084979/apiserver.key.07b0558b: {Name:mk1a38902bbc52c78732928f3b3e47dae7e2ccc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:31:51.999662  250396 certs.go:382] copying /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/embed-certs-084979/apiserver.crt.07b0558b -> /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/embed-certs-084979/apiserver.crt
	I1122 00:31:51.999745  250396 certs.go:386] copying /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/embed-certs-084979/apiserver.key.07b0558b -> /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/embed-certs-084979/apiserver.key
	I1122 00:31:51.999833  250396 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/embed-certs-084979/proxy-client.key
	I1122 00:31:51.999853  250396 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/embed-certs-084979/proxy-client.crt with IP's: []
	I1122 00:31:52.055968  250396 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/embed-certs-084979/proxy-client.crt ...
	I1122 00:31:52.055992  250396 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/embed-certs-084979/proxy-client.crt: {Name:mk03d889db74e292f9976d617aa05998cb02e66b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:31:52.056171  250396 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/embed-certs-084979/proxy-client.key ...
	I1122 00:31:52.056189  250396 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/embed-certs-084979/proxy-client.key: {Name:mk4c9aa5f036245d68274405484e9ac87026c161 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
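Editor's note: each "generating signed profile cert ... with IP's: [...]" step above issues a leaf certificate signed by the shared minikube CA, with the listed IPs as subject alternative names. A self-contained Go sketch of that shape using crypto/x509 follows; key sizes, subjects, and lifetimes here are illustrative, not minikube's exact parameters.

// signedcert.go: issue a CA-signed certificate with IP SANs, roughly the
// shape of the profile-cert steps above. A sketch, not minikube code.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Self-signed CA, standing in for minikubeCA.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	// Leaf cert carrying the IP SANs seen in the log above.
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.94.2"),
		},
	}
	leafDER, err := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
}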
	I1122 00:31:52.056392  250396 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/14585.pem (1338 bytes)
	W1122 00:31:52.056432  250396 certs.go:480] ignoring /home/jenkins/minikube-integration/21934-9122/.minikube/certs/14585_empty.pem, impossibly tiny 0 bytes
	I1122 00:31:52.056444  250396 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca-key.pem (1679 bytes)
	I1122 00:31:52.056470  250396 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem (1078 bytes)
	I1122 00:31:52.056495  250396 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem (1123 bytes)
	I1122 00:31:52.056520  250396 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/key.pem (1675 bytes)
	I1122 00:31:52.056572  250396 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem (1708 bytes)
	I1122 00:31:52.057206  250396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1122 00:31:52.075454  250396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1122 00:31:52.093570  250396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1122 00:31:52.111215  250396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1122 00:31:52.128121  250396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/embed-certs-084979/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1122 00:31:52.145590  250396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/embed-certs-084979/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1122 00:31:52.161883  250396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/embed-certs-084979/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1122 00:31:52.178682  250396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/embed-certs-084979/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1122 00:31:52.196086  250396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1122 00:31:52.213742  250396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/certs/14585.pem --> /usr/share/ca-certificates/14585.pem (1338 bytes)
	I1122 00:31:52.230306  250396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem --> /usr/share/ca-certificates/145852.pem (1708 bytes)
	I1122 00:31:52.246583  250396 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1122 00:31:52.258573  250396 ssh_runner.go:195] Run: openssl version
	I1122 00:31:52.264445  250396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1122 00:31:52.273043  250396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:31:52.276979  250396 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 23:46 /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:31:52.277025  250396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:31:52.313103  250396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1122 00:31:52.321892  250396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14585.pem && ln -fs /usr/share/ca-certificates/14585.pem /etc/ssl/certs/14585.pem"
	I1122 00:31:52.330157  250396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14585.pem
	I1122 00:31:52.333555  250396 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 23:52 /usr/share/ca-certificates/14585.pem
	I1122 00:31:52.333604  250396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14585.pem
	I1122 00:31:52.367422  250396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14585.pem /etc/ssl/certs/51391683.0"
	I1122 00:31:52.375302  250396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145852.pem && ln -fs /usr/share/ca-certificates/145852.pem /etc/ssl/certs/145852.pem"
	I1122 00:31:52.384128  250396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145852.pem
	I1122 00:31:52.387751  250396 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 23:52 /usr/share/ca-certificates/145852.pem
	I1122 00:31:52.387800  250396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145852.pem
	I1122 00:31:52.423863  250396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145852.pem /etc/ssl/certs/3ec20f2e.0"
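Editor's note: the test/ln pairs above install each CA bundle under /etc/ssl/certs/<subject-hash>.0, the lookup name that OpenSSL-linked clients expect. Since the subject-hash algorithm is fiddly to reimplement, a sketch can shell out to openssl exactly as the log does (assumes openssl on PATH; paths are taken from the log):

// hashlink.go: recreate the "<subject-hash>.0" symlink pattern from the log.
// A sketch only; needs root to write under /etc/ssl/certs.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkCert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem above
	link := filepath.Join(certsDir, hash+".0")
	os.Remove(link) // ignore error: the link may not exist yet
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("linked")
}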
	I1122 00:31:52.432456  250396 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1122 00:31:52.435730  250396 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1122 00:31:52.435791  250396 kubeadm.go:401] StartCluster: {Name:embed-certs-084979 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-084979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:31:52.435860  250396 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1122 00:31:52.435913  250396 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1122 00:31:52.461819  250396 cri.go:89] found id: ""
	I1122 00:31:52.461886  250396 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1122 00:31:52.469168  250396 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1122 00:31:52.476311  250396 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1122 00:31:52.476361  250396 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1122 00:31:52.483567  250396 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1122 00:31:52.483586  250396 kubeadm.go:158] found existing configuration files:
	
	I1122 00:31:52.483623  250396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1122 00:31:52.490766  250396 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1122 00:31:52.490823  250396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1122 00:31:52.497480  250396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1122 00:31:52.504492  250396 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1122 00:31:52.504532  250396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1122 00:31:52.511332  250396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1122 00:31:52.518517  250396 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1122 00:31:52.518562  250396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1122 00:31:52.525346  250396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1122 00:31:52.532199  250396 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1122 00:31:52.532243  250396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
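Editor's note: the grep/rm loop above checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and deletes any file that is missing or stale, so kubeadm regenerates it on init. The same check in Go, as a sketch (file list and endpoint are taken from the log):

// staleconf.go: remove kubeconfigs that do not point at the expected control
// plane, as in the grep/rm loop above. A sketch, not minikube's implementation.
package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	endpoint := []byte("https://control-plane.minikube.internal:8443")
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err == nil && bytes.Contains(data, endpoint) {
			continue // config already targets the right endpoint; keep it
		}
		// Missing or stale: delete so kubeadm regenerates it.
		os.Remove(f)
		fmt.Println("removed", f)
	}
}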
	I1122 00:31:52.538898  250396 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1122 00:31:52.593013  250396 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1122 00:31:52.646878  250396 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
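Editor's note: the Start line above wraps kubeadm init in bash so the v1.34.1 binaries directory can be prefixed onto PATH under sudo. A hedged Go sketch of launching it the same way (the ignore-preflight-errors list is shortened here for readability):

// initrun.go: run kubeadm init through the same bash wrapper the log uses,
// so the prefixed PATH survives sudo. A sketch only.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo", "/bin/bash", "-c",
		`env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init`+
			` --config /var/tmp/minikube/kubeadm.yaml`+
			` --ignore-preflight-errors=Swap,NumCPU,Mem,SystemVerification`)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}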
	I1122 00:31:51.794137  218533 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1122 00:31:51.794537  218533 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1122 00:31:51.794597  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:31:51.794642  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:31:51.822089  218533 cri.go:89] found id: "28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5"
	I1122 00:31:51.822110  218533 cri.go:89] found id: ""
	I1122 00:31:51.822120  218533 logs.go:282] 1 containers: [28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5]
	I1122 00:31:51.822178  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:31:51.826338  218533 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1122 00:31:51.826389  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:31:51.852433  218533 cri.go:89] found id: ""
	I1122 00:31:51.852457  218533 logs.go:282] 0 containers: []
	W1122 00:31:51.852466  218533 logs.go:284] No container was found matching "etcd"
	I1122 00:31:51.852472  218533 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1122 00:31:51.852518  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:31:51.877216  218533 cri.go:89] found id: ""
	I1122 00:31:51.877239  218533 logs.go:282] 0 containers: []
	W1122 00:31:51.877249  218533 logs.go:284] No container was found matching "coredns"
	I1122 00:31:51.877255  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:31:51.877308  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:31:51.903379  218533 cri.go:89] found id: "f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:31:51.903399  218533 cri.go:89] found id: ""
	I1122 00:31:51.903409  218533 logs.go:282] 1 containers: [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33]
	I1122 00:31:51.903466  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:31:51.907316  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:31:51.907375  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:31:51.933242  218533 cri.go:89] found id: ""
	I1122 00:31:51.933266  218533 logs.go:282] 0 containers: []
	W1122 00:31:51.933276  218533 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:31:51.933283  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:31:51.933340  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:31:51.958648  218533 cri.go:89] found id: "93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c"
	I1122 00:31:51.958666  218533 cri.go:89] found id: "fc0b1100e632feea08739f72906f3988af3242a3f139db7f843c6f20733f87d9"
	I1122 00:31:51.958672  218533 cri.go:89] found id: ""
	I1122 00:31:51.958681  218533 logs.go:282] 2 containers: [93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c fc0b1100e632feea08739f72906f3988af3242a3f139db7f843c6f20733f87d9]
	I1122 00:31:51.958737  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:31:51.962259  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:31:51.965555  218533 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1122 00:31:51.965610  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:31:51.990254  218533 cri.go:89] found id: ""
	I1122 00:31:51.990273  218533 logs.go:282] 0 containers: []
	W1122 00:31:51.990281  218533 logs.go:284] No container was found matching "kindnet"
	I1122 00:31:51.990287  218533 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:31:51.990332  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:31:52.014313  218533 cri.go:89] found id: ""
	I1122 00:31:52.014334  218533 logs.go:282] 0 containers: []
	W1122 00:31:52.014342  218533 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:31:52.014359  218533 logs.go:123] Gathering logs for dmesg ...
	I1122 00:31:52.014371  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1122 00:31:52.027669  218533 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:31:52.027687  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1122 00:31:52.081269  218533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1122 00:31:52.081286  218533 logs.go:123] Gathering logs for kube-apiserver [28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5] ...
	I1122 00:31:52.081300  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5"
	I1122 00:31:52.112861  218533 logs.go:123] Gathering logs for kube-scheduler [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33] ...
	I1122 00:31:52.112885  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:31:52.165363  218533 logs.go:123] Gathering logs for kubelet ...
	I1122 00:31:52.165385  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:31:52.248168  218533 logs.go:123] Gathering logs for kube-controller-manager [93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c] ...
	I1122 00:31:52.248193  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c"
	I1122 00:31:52.273678  218533 logs.go:123] Gathering logs for kube-controller-manager [fc0b1100e632feea08739f72906f3988af3242a3f139db7f843c6f20733f87d9] ...
	I1122 00:31:52.273701  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fc0b1100e632feea08739f72906f3988af3242a3f139db7f843c6f20733f87d9"
	I1122 00:31:52.300348  218533 logs.go:123] Gathering logs for CRI-O ...
	I1122 00:31:52.300371  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1122 00:31:52.355540  218533 logs.go:123] Gathering logs for container status ...
	I1122 00:31:52.355565  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
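Editor's note: the "Gathering logs for ..." sequence above resolves container IDs per component with crictl ps, then tails 400 lines of each. A sketch of that loop in Go (component names and tail length are taken from the log; error handling is simplified):

// gatherlogs.go: tail the last 400 lines of each CRI container's logs via
// crictl, like the log-gathering steps above. A sketch only.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, name := range []string{"kube-apiserver", "kube-scheduler", "kube-controller-manager"} {
		ids, err := containerIDs(name)
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", name)
			continue
		}
		for _, id := range ids {
			out, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("== %s [%s] ==\n%s\n", name, id, out)
		}
	}
}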
	I1122 00:31:50.848281  252747 out.go:252] * Restarting existing docker container for "no-preload-983546" ...
	I1122 00:31:50.848356  252747 cli_runner.go:164] Run: docker start no-preload-983546
	I1122 00:31:51.131921  252747 cli_runner.go:164] Run: docker container inspect no-preload-983546 --format={{.State.Status}}
	I1122 00:31:51.151622  252747 kic.go:430] container "no-preload-983546" state is running.
	I1122 00:31:51.151958  252747 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-983546
	I1122 00:31:51.171404  252747 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/no-preload-983546/config.json ...
	I1122 00:31:51.171627  252747 machine.go:94] provisionDockerMachine start ...
	I1122 00:31:51.171729  252747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983546
	I1122 00:31:51.192252  252747 main.go:143] libmachine: Using SSH client type: native
	I1122 00:31:51.192557  252747 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1122 00:31:51.192580  252747 main.go:143] libmachine: About to run SSH command:
	hostname
	I1122 00:31:51.193349  252747 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50394->127.0.0.1:33073: read: connection reset by peer
	I1122 00:31:54.314715  252747 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-983546
	
	I1122 00:31:54.314745  252747 ubuntu.go:182] provisioning hostname "no-preload-983546"
	I1122 00:31:54.314802  252747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983546
	I1122 00:31:54.334974  252747 main.go:143] libmachine: Using SSH client type: native
	I1122 00:31:54.335274  252747 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1122 00:31:54.335295  252747 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-983546 && echo "no-preload-983546" | sudo tee /etc/hostname
	I1122 00:31:54.465189  252747 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-983546
	
	I1122 00:31:54.465278  252747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983546
	I1122 00:31:54.484420  252747 main.go:143] libmachine: Using SSH client type: native
	I1122 00:31:54.484637  252747 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1122 00:31:54.484653  252747 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-983546' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-983546/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-983546' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1122 00:31:54.608351  252747 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1122 00:31:54.608375  252747 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21934-9122/.minikube CaCertPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21934-9122/.minikube}
	I1122 00:31:54.608404  252747 ubuntu.go:190] setting up certificates
	I1122 00:31:54.608413  252747 provision.go:84] configureAuth start
	I1122 00:31:54.608458  252747 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-983546
	I1122 00:31:54.627818  252747 provision.go:143] copyHostCerts
	I1122 00:31:54.627870  252747 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9122/.minikube/ca.pem, removing ...
	I1122 00:31:54.627882  252747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9122/.minikube/ca.pem
	I1122 00:31:54.627942  252747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21934-9122/.minikube/ca.pem (1078 bytes)
	I1122 00:31:54.628033  252747 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9122/.minikube/cert.pem, removing ...
	I1122 00:31:54.628042  252747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9122/.minikube/cert.pem
	I1122 00:31:54.628107  252747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21934-9122/.minikube/cert.pem (1123 bytes)
	I1122 00:31:54.628190  252747 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9122/.minikube/key.pem, removing ...
	I1122 00:31:54.628198  252747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9122/.minikube/key.pem
	I1122 00:31:54.628230  252747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21934-9122/.minikube/key.pem (1675 bytes)
	I1122 00:31:54.628307  252747 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21934-9122/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca-key.pem org=jenkins.no-preload-983546 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-983546]
	I1122 00:31:54.742310  252747 provision.go:177] copyRemoteCerts
	I1122 00:31:54.742364  252747 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1122 00:31:54.742401  252747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983546
	I1122 00:31:54.760782  252747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/no-preload-983546/id_rsa Username:docker}
	I1122 00:31:54.854217  252747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1122 00:31:54.872016  252747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1122 00:31:54.889448  252747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1122 00:31:54.906895  252747 provision.go:87] duration metric: took 298.456083ms to configureAuth
	I1122 00:31:54.906922  252747 ubuntu.go:206] setting minikube options for container-runtime
	I1122 00:31:54.907146  252747 config.go:182] Loaded profile config "no-preload-983546": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:31:54.907290  252747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983546
	I1122 00:31:54.931380  252747 main.go:143] libmachine: Using SSH client type: native
	I1122 00:31:54.931696  252747 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1122 00:31:54.931723  252747 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1122 00:31:55.260050  252747 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1122 00:31:55.260092  252747 machine.go:97] duration metric: took 4.088447626s to provisionDockerMachine
	I1122 00:31:55.260106  252747 start.go:293] postStartSetup for "no-preload-983546" (driver="docker")
	I1122 00:31:55.260120  252747 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1122 00:31:55.260182  252747 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1122 00:31:55.260256  252747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983546
	I1122 00:31:55.281816  252747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/no-preload-983546/id_rsa Username:docker}
	I1122 00:31:55.373431  252747 ssh_runner.go:195] Run: cat /etc/os-release
	I1122 00:31:55.376810  252747 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1122 00:31:55.376843  252747 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1122 00:31:55.376855  252747 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-9122/.minikube/addons for local assets ...
	I1122 00:31:55.376905  252747 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-9122/.minikube/files for local assets ...
	I1122 00:31:55.376999  252747 filesync.go:149] local asset: /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem -> 145852.pem in /etc/ssl/certs
	I1122 00:31:55.377153  252747 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1122 00:31:55.384704  252747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem --> /etc/ssl/certs/145852.pem (1708 bytes)
	I1122 00:31:55.402924  252747 start.go:296] duration metric: took 142.803451ms for postStartSetup
	I1122 00:31:55.402990  252747 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:31:55.403084  252747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983546
	I1122 00:31:55.424299  252747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/no-preload-983546/id_rsa Username:docker}
	I1122 00:31:55.517831  252747 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 00:31:55.522329  252747 fix.go:56] duration metric: took 4.696707078s for fixHost
	I1122 00:31:55.522358  252747 start.go:83] releasing machines lock for "no-preload-983546", held for 4.696763245s
	I1122 00:31:55.522429  252747 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-983546
	I1122 00:31:55.540303  252747 ssh_runner.go:195] Run: cat /version.json
	I1122 00:31:55.540353  252747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983546
	I1122 00:31:55.540390  252747 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1122 00:31:55.540446  252747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983546
	I1122 00:31:55.560177  252747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/no-preload-983546/id_rsa Username:docker}
	I1122 00:31:55.560516  252747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/no-preload-983546/id_rsa Username:docker}
	I1122 00:31:55.696599  252747 ssh_runner.go:195] Run: systemctl --version
	I1122 00:31:55.702926  252747 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1122 00:31:55.735448  252747 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1122 00:31:55.739993  252747 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1122 00:31:55.740069  252747 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1122 00:31:55.747620  252747 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
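Editor's note: the find -exec mv step above parks any bridge or podman CNI configs by renaming them with a .mk_disabled suffix, so they cannot conflict with the kindnet CNI that minikube installs. The same rename pass in Go, as a sketch:

// cnidisable.go: rename bridge/podman CNI configs out of the way, mirroring
// the find -exec mv step above, with the same .mk_disabled suffix. A sketch.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	const dir = "/etc/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue // already disabled, or not a config file
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			old := filepath.Join(dir, name)
			if err := os.Rename(old, old+".mk_disabled"); err != nil {
				fmt.Fprintln(os.Stderr, err)
			}
		}
	}
}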
	I1122 00:31:55.747640  252747 start.go:496] detecting cgroup driver to use...
	I1122 00:31:55.747674  252747 detect.go:190] detected "systemd" cgroup driver on host os
	I1122 00:31:55.747717  252747 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1122 00:31:55.761064  252747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1122 00:31:55.772340  252747 docker.go:218] disabling cri-docker service (if available) ...
	I1122 00:31:55.772403  252747 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1122 00:31:55.785492  252747 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1122 00:31:55.796478  252747 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1122 00:31:55.874681  252747 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1122 00:31:55.961366  252747 docker.go:234] disabling docker service ...
	I1122 00:31:55.961432  252747 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1122 00:31:55.974916  252747 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1122 00:31:55.986497  252747 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1122 00:31:56.068892  252747 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1122 00:31:56.148432  252747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1122 00:31:56.161452  252747 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1122 00:31:56.176024  252747 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1122 00:31:56.176100  252747 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:31:56.184853  252747 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1122 00:31:56.184907  252747 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:31:56.193105  252747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:31:56.201194  252747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:31:56.209087  252747 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1122 00:31:56.216413  252747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:31:56.224446  252747 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:31:56.232310  252747 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:31:56.240372  252747 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1122 00:31:56.247278  252747 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1122 00:31:56.254025  252747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:31:56.328811  252747 ssh_runner.go:195] Run: sudo systemctl restart crio
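Editor's note: the sed commands above patch /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, force the systemd cgroup manager, and reinstate the conmon_cgroup and default_sysctls entries, before the daemon-reload and crio restart. A Go sketch of the two simplest rewrites (the regexes mirror the sed expressions; values and path are from the log):

// criocfg.go: apply the pause_image and cgroup_manager rewrites the sed
// commands above perform. A sketch with hard-coded values from the log.
package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	// Replace any existing pause_image line with the pinned image.
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	// Force the systemd cgroup manager to match the kubelet's cgroupDriver.
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "systemd"`))
	if err := os.WriteFile(path, data, 0644); err != nil {
		log.Fatal(err)
	}
}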
	I1122 00:31:56.460550  252747 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1122 00:31:56.460619  252747 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1122 00:31:56.464996  252747 start.go:564] Will wait 60s for crictl version
	I1122 00:31:56.465083  252747 ssh_runner.go:195] Run: which crictl
	I1122 00:31:56.468598  252747 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1122 00:31:56.493086  252747 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1122 00:31:56.493164  252747 ssh_runner.go:195] Run: crio --version
	I1122 00:31:56.522723  252747 ssh_runner.go:195] Run: crio --version
	I1122 00:31:56.550862  252747 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	W1122 00:31:54.323974  246023 pod_ready.go:104] pod "coredns-5dd5756b68-lwzsc" is not "Ready", error: <nil>
	W1122 00:31:56.324607  246023 pod_ready.go:104] pod "coredns-5dd5756b68-lwzsc" is not "Ready", error: <nil>
	I1122 00:31:56.552289  252747 cli_runner.go:164] Run: docker network inspect no-preload-983546 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:31:56.570743  252747 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1122 00:31:56.574737  252747 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:31:56.584814  252747 kubeadm.go:884] updating cluster {Name:no-preload-983546 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-983546 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1122 00:31:56.584908  252747 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:31:56.584937  252747 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:31:56.618953  252747 crio.go:514] all images are preloaded for cri-o runtime.
	I1122 00:31:56.618977  252747 cache_images.go:86] Images are preloaded, skipping loading
	I1122 00:31:56.618986  252747 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1122 00:31:56.619132  252747 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-983546 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-983546 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1122 00:31:56.619210  252747 ssh_runner.go:195] Run: crio config
	I1122 00:31:56.672075  252747 cni.go:84] Creating CNI manager for ""
	I1122 00:31:56.672099  252747 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1122 00:31:56.672118  252747 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1122 00:31:56.672149  252747 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-983546 NodeName:no-preload-983546 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1122 00:31:56.672287  252747 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-983546"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1122 00:31:56.672436  252747 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1122 00:31:56.683026  252747 binaries.go:51] Found k8s binaries, skipping transfer
	I1122 00:31:56.683102  252747 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1122 00:31:56.692749  252747 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1122 00:31:56.708604  252747 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1122 00:31:56.722605  252747 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1122 00:31:56.738507  252747 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1122 00:31:56.743442  252747 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:31:56.752609  252747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:31:56.843504  252747 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:31:56.874364  252747 certs.go:69] Setting up /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/no-preload-983546 for IP: 192.168.76.2
	I1122 00:31:56.874392  252747 certs.go:195] generating shared ca certs ...
	I1122 00:31:56.874414  252747 certs.go:227] acquiring lock for ca certs: {Name:mk04e97b22fe69e444baefaaf0d53a0afe979470 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:31:56.874581  252747 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21934-9122/.minikube/ca.key
	I1122 00:31:56.874643  252747 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21934-9122/.minikube/proxy-client-ca.key
	I1122 00:31:56.874667  252747 certs.go:257] generating profile certs ...
	I1122 00:31:56.874783  252747 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/no-preload-983546/client.key
	I1122 00:31:56.874848  252747 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/no-preload-983546/apiserver.key.c827695f
	I1122 00:31:56.874896  252747 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/no-preload-983546/proxy-client.key
	I1122 00:31:56.875031  252747 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/14585.pem (1338 bytes)
	W1122 00:31:56.875099  252747 certs.go:480] ignoring /home/jenkins/minikube-integration/21934-9122/.minikube/certs/14585_empty.pem, impossibly tiny 0 bytes
	I1122 00:31:56.875114  252747 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca-key.pem (1679 bytes)
	I1122 00:31:56.875151  252747 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem (1078 bytes)
	I1122 00:31:56.875186  252747 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem (1123 bytes)
	I1122 00:31:56.875218  252747 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/key.pem (1675 bytes)
	I1122 00:31:56.875277  252747 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem (1708 bytes)
	I1122 00:31:56.876110  252747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1122 00:31:56.899488  252747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1122 00:31:56.923289  252747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1122 00:31:56.945822  252747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1122 00:31:56.976511  252747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/no-preload-983546/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1122 00:31:56.998348  252747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/no-preload-983546/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1122 00:31:57.020565  252747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/no-preload-983546/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1122 00:31:57.041861  252747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/no-preload-983546/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1122 00:31:57.058625  252747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem --> /usr/share/ca-certificates/145852.pem (1708 bytes)
	I1122 00:31:57.079381  252747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1122 00:31:57.100945  252747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/certs/14585.pem --> /usr/share/ca-certificates/14585.pem (1338 bytes)
	I1122 00:31:57.122401  252747 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1122 00:31:57.136844  252747 ssh_runner.go:195] Run: openssl version
	I1122 00:31:57.144785  252747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145852.pem && ln -fs /usr/share/ca-certificates/145852.pem /etc/ssl/certs/145852.pem"
	I1122 00:31:57.154509  252747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145852.pem
	I1122 00:31:57.157998  252747 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 23:52 /usr/share/ca-certificates/145852.pem
	I1122 00:31:57.158043  252747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145852.pem
	I1122 00:31:57.196911  252747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145852.pem /etc/ssl/certs/3ec20f2e.0"
	I1122 00:31:57.204953  252747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1122 00:31:57.216330  252747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:31:57.221264  252747 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 23:46 /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:31:57.221316  252747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:31:57.256157  252747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1122 00:31:57.263394  252747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14585.pem && ln -fs /usr/share/ca-certificates/14585.pem /etc/ssl/certs/14585.pem"
	I1122 00:31:57.272232  252747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14585.pem
	I1122 00:31:57.275624  252747 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 23:52 /usr/share/ca-certificates/14585.pem
	I1122 00:31:57.275665  252747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14585.pem
	I1122 00:31:57.334483  252747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14585.pem /etc/ssl/certs/51391683.0"
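
The three `openssl x509 -hash -noout` / `ln -fs` pairs above follow the standard OpenSSL CA-directory convention: each trusted certificate gets a symlink named <subject-hash>.0 in /etc/ssl/certs so OpenSSL-based clients can look it up by hash. A minimal Go sketch of that convention, shelling out to the openssl binary the same way the runner does (paths are illustrative, not minikube's internal API):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash mirrors the "openssl x509 -hash" + "ln -fs" pattern in the
// log: compute the subject hash of a PEM certificate, then symlink it into
// the CA directory as <hash>.0 so OpenSSL-based lookups can find it.
func linkCertByHash(certPath, caDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(caDir, hash+".0")
	// ln -fs semantics: drop any stale link before creating the new one.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	// Illustrative path; the test run links /usr/share/ca-certificates/*.pem.
	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
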
	I1122 00:31:57.344634  252747 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1122 00:31:57.348789  252747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1122 00:31:57.401421  252747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1122 00:31:57.456304  252747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1122 00:31:57.516102  252747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1122 00:31:57.573323  252747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1122 00:31:57.627564  252747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
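
Each `-checkend 86400` run above asks openssl whether the certificate expires within the next 24 hours (86400 seconds). The same check can be done natively with crypto/x509; a small self-contained sketch with an illustrative path:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first PEM certificate in path expires
// within d, matching the semantics of `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
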
	I1122 00:31:57.680496  252747 kubeadm.go:401] StartCluster: {Name:no-preload-983546 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-983546 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:31:57.680614  252747 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1122 00:31:57.680688  252747 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1122 00:31:57.713679  252747 cri.go:89] found id: "15ff8ca6c3bd36d322a741daeef18c6b81980f6be123f1eccf822d0b1ce32e19"
	I1122 00:31:57.713708  252747 cri.go:89] found id: "2395f0fc0ddc2558b662ecf094a2c9137111096336ce24f63f4bb978edacc84d"
	I1122 00:31:57.713714  252747 cri.go:89] found id: "2e71abd4010063bf4aff10634290d6163b0d784274776fb107399539e1af2d22"
	I1122 00:31:57.713719  252747 cri.go:89] found id: "748b8383a47b0f40485edc4c674299b4dcb993eccaae00337a17f00f55de0076"
	I1122 00:31:57.713723  252747 cri.go:89] found id: ""
	I1122 00:31:57.713771  252747 ssh_runner.go:195] Run: sudo runc list -f json
	W1122 00:31:57.730360  252747 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-22T00:31:57Z" level=error msg="open /run/runc: no such file or directory"
	I1122 00:31:57.730524  252747 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1122 00:31:57.743981  252747 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1122 00:31:57.743998  252747 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1122 00:31:57.744102  252747 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1122 00:31:57.754761  252747 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1122 00:31:57.756511  252747 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-983546" does not appear in /home/jenkins/minikube-integration/21934-9122/kubeconfig
	I1122 00:31:57.757200  252747 kubeconfig.go:62] /home/jenkins/minikube-integration/21934-9122/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-983546" cluster setting kubeconfig missing "no-preload-983546" context setting]
	I1122 00:31:57.758247  252747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/kubeconfig: {Name:mk85f0b9d89ff824568ecbebf0d1111042a6bb93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:31:57.760139  252747 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1122 00:31:57.771135  252747 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1122 00:31:57.771164  252747 kubeadm.go:602] duration metric: took 27.159505ms to restartPrimaryControlPlane
	I1122 00:31:57.771179  252747 kubeadm.go:403] duration metric: took 90.693509ms to StartCluster
	I1122 00:31:57.771242  252747 settings.go:142] acquiring lock: {Name:mk281bec5fc7c41e6f3fe8d6a1502b13a2db8fe5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:31:57.771303  252747 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21934-9122/kubeconfig
	I1122 00:31:57.772922  252747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/kubeconfig: {Name:mk85f0b9d89ff824568ecbebf0d1111042a6bb93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:31:57.773154  252747 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1122 00:31:57.773373  252747 config.go:182] Loaded profile config "no-preload-983546": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:31:57.773425  252747 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1122 00:31:57.773502  252747 addons.go:70] Setting storage-provisioner=true in profile "no-preload-983546"
	I1122 00:31:57.773525  252747 addons.go:239] Setting addon storage-provisioner=true in "no-preload-983546"
	W1122 00:31:57.773533  252747 addons.go:248] addon storage-provisioner should already be in state true
	I1122 00:31:57.773559  252747 host.go:66] Checking if "no-preload-983546" exists ...
	I1122 00:31:57.773633  252747 addons.go:70] Setting dashboard=true in profile "no-preload-983546"
	I1122 00:31:57.773665  252747 addons.go:239] Setting addon dashboard=true in "no-preload-983546"
	W1122 00:31:57.773672  252747 addons.go:248] addon dashboard should already be in state true
	I1122 00:31:57.773724  252747 host.go:66] Checking if "no-preload-983546" exists ...
	I1122 00:31:57.774045  252747 cli_runner.go:164] Run: docker container inspect no-preload-983546 --format={{.State.Status}}
	I1122 00:31:57.774162  252747 cli_runner.go:164] Run: docker container inspect no-preload-983546 --format={{.State.Status}}
	I1122 00:31:57.774249  252747 addons.go:70] Setting default-storageclass=true in profile "no-preload-983546"
	I1122 00:31:57.774273  252747 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-983546"
	I1122 00:31:57.774573  252747 cli_runner.go:164] Run: docker container inspect no-preload-983546 --format={{.State.Status}}
	I1122 00:31:57.776330  252747 out.go:179] * Verifying Kubernetes components...
	I1122 00:31:57.777645  252747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:31:57.804654  252747 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1122 00:31:57.805951  252747 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1122 00:31:57.805997  252747 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1122 00:31:54.885582  218533 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1122 00:31:54.885967  218533 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1122 00:31:54.886026  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:31:54.886095  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:31:54.912955  218533 cri.go:89] found id: "28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5"
	I1122 00:31:54.912974  218533 cri.go:89] found id: ""
	I1122 00:31:54.912983  218533 logs.go:282] 1 containers: [28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5]
	I1122 00:31:54.913035  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:31:54.917400  218533 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1122 00:31:54.917458  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:31:54.951913  218533 cri.go:89] found id: ""
	I1122 00:31:54.951937  218533 logs.go:282] 0 containers: []
	W1122 00:31:54.951947  218533 logs.go:284] No container was found matching "etcd"
	I1122 00:31:54.951955  218533 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1122 00:31:54.952009  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:31:54.982692  218533 cri.go:89] found id: ""
	I1122 00:31:54.982716  218533 logs.go:282] 0 containers: []
	W1122 00:31:54.982728  218533 logs.go:284] No container was found matching "coredns"
	I1122 00:31:54.982735  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:31:54.982793  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:31:55.022244  218533 cri.go:89] found id: "f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:31:55.022271  218533 cri.go:89] found id: ""
	I1122 00:31:55.022281  218533 logs.go:282] 1 containers: [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33]
	I1122 00:31:55.022340  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:31:55.027065  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:31:55.027145  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:31:55.053420  218533 cri.go:89] found id: ""
	I1122 00:31:55.053441  218533 logs.go:282] 0 containers: []
	W1122 00:31:55.053451  218533 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:31:55.053458  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:31:55.053519  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:31:55.084948  218533 cri.go:89] found id: "93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c"
	I1122 00:31:55.084972  218533 cri.go:89] found id: ""
	I1122 00:31:55.084982  218533 logs.go:282] 1 containers: [93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c]
	I1122 00:31:55.085042  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:31:55.088797  218533 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1122 00:31:55.088877  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:31:55.115033  218533 cri.go:89] found id: ""
	I1122 00:31:55.115077  218533 logs.go:282] 0 containers: []
	W1122 00:31:55.115089  218533 logs.go:284] No container was found matching "kindnet"
	I1122 00:31:55.115097  218533 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:31:55.115149  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:31:55.142914  218533 cri.go:89] found id: ""
	I1122 00:31:55.142941  218533 logs.go:282] 0 containers: []
	W1122 00:31:55.142952  218533 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:31:55.142966  218533 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:31:55.142987  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1122 00:31:55.204133  218533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1122 00:31:55.204156  218533 logs.go:123] Gathering logs for kube-apiserver [28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5] ...
	I1122 00:31:55.204173  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5"
	I1122 00:31:55.241169  218533 logs.go:123] Gathering logs for kube-scheduler [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33] ...
	I1122 00:31:55.241201  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:31:55.296609  218533 logs.go:123] Gathering logs for kube-controller-manager [93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c] ...
	I1122 00:31:55.296636  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c"
	I1122 00:31:55.323944  218533 logs.go:123] Gathering logs for CRI-O ...
	I1122 00:31:55.323973  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1122 00:31:55.380399  218533 logs.go:123] Gathering logs for container status ...
	I1122 00:31:55.380425  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1122 00:31:55.415326  218533 logs.go:123] Gathering logs for kubelet ...
	I1122 00:31:55.415353  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:31:55.511144  218533 logs.go:123] Gathering logs for dmesg ...
	I1122 00:31:55.511176  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
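
The gathering pass above is a fixed loop: for each control-plane component, find container IDs with `crictl ps -a --quiet --name=<component>`, then pull the last 400 lines of each with `crictl logs --tail 400`. A compact sketch of that loop, run locally rather than over the test's SSH runner (component names taken from the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
	}
	for _, name := range components {
		// crictl ps -a --quiet prints one container ID per line (may be empty).
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("listing %s: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", name)
			continue
		}
		for _, id := range ids {
			logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("=== %s [%s] ===\n%s\n", name, id, logs)
		}
	}
}
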
	I1122 00:31:58.028115  218533 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1122 00:31:58.028583  218533 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1122 00:31:58.028634  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:31:58.028682  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:31:58.078096  218533 cri.go:89] found id: "28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5"
	I1122 00:31:58.078119  218533 cri.go:89] found id: ""
	I1122 00:31:58.078128  218533 logs.go:282] 1 containers: [28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5]
	I1122 00:31:58.078193  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:31:58.084665  218533 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1122 00:31:58.084829  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:31:58.145151  218533 cri.go:89] found id: ""
	I1122 00:31:58.145177  218533 logs.go:282] 0 containers: []
	W1122 00:31:58.145188  218533 logs.go:284] No container was found matching "etcd"
	I1122 00:31:58.145195  218533 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1122 00:31:58.145269  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:31:58.190884  218533 cri.go:89] found id: ""
	I1122 00:31:58.190913  218533 logs.go:282] 0 containers: []
	W1122 00:31:58.190923  218533 logs.go:284] No container was found matching "coredns"
	I1122 00:31:58.190931  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:31:58.190993  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:31:58.245494  218533 cri.go:89] found id: "f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:31:58.245517  218533 cri.go:89] found id: ""
	I1122 00:31:58.245527  218533 logs.go:282] 1 containers: [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33]
	I1122 00:31:58.245596  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:31:58.251672  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:31:58.251741  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:31:58.295969  218533 cri.go:89] found id: ""
	I1122 00:31:58.295989  218533 logs.go:282] 0 containers: []
	W1122 00:31:58.295999  218533 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:31:58.296006  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:31:58.296070  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:31:58.351204  218533 cri.go:89] found id: "93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c"
	I1122 00:31:58.351228  218533 cri.go:89] found id: ""
	I1122 00:31:58.351238  218533 logs.go:282] 1 containers: [93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c]
	I1122 00:31:58.351307  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:31:58.356276  218533 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1122 00:31:58.356339  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:31:58.409482  218533 cri.go:89] found id: ""
	I1122 00:31:58.409506  218533 logs.go:282] 0 containers: []
	W1122 00:31:58.409517  218533 logs.go:284] No container was found matching "kindnet"
	I1122 00:31:58.409524  218533 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:31:58.409576  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:31:58.449491  218533 cri.go:89] found id: ""
	I1122 00:31:58.449635  218533 logs.go:282] 0 containers: []
	W1122 00:31:58.449684  218533 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:31:58.449729  218533 logs.go:123] Gathering logs for dmesg ...
	I1122 00:31:58.449752  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1122 00:31:58.481719  218533 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:31:58.481744  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1122 00:31:58.570885  218533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1122 00:31:58.570908  218533 logs.go:123] Gathering logs for kube-apiserver [28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5] ...
	I1122 00:31:58.570923  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5"
	I1122 00:31:58.620101  218533 logs.go:123] Gathering logs for kube-scheduler [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33] ...
	I1122 00:31:58.620130  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:31:58.702983  218533 logs.go:123] Gathering logs for kube-controller-manager [93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c] ...
	I1122 00:31:58.703015  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c"
	I1122 00:31:58.740116  218533 logs.go:123] Gathering logs for CRI-O ...
	I1122 00:31:58.740145  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1122 00:31:58.837747  218533 logs.go:123] Gathering logs for container status ...
	I1122 00:31:58.837780  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1122 00:31:58.880395  218533 logs.go:123] Gathering logs for kubelet ...
	I1122 00:31:58.880426  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:31:57.806973  252747 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1122 00:31:57.807000  252747 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1122 00:31:57.807042  252747 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:31:57.807081  252747 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1122 00:31:57.807083  252747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983546
	I1122 00:31:57.807130  252747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983546
	I1122 00:31:57.817030  252747 addons.go:239] Setting addon default-storageclass=true in "no-preload-983546"
	W1122 00:31:57.817107  252747 addons.go:248] addon default-storageclass should already be in state true
	I1122 00:31:57.817143  252747 host.go:66] Checking if "no-preload-983546" exists ...
	I1122 00:31:57.817705  252747 cli_runner.go:164] Run: docker container inspect no-preload-983546 --format={{.State.Status}}
	I1122 00:31:57.854339  252747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/no-preload-983546/id_rsa Username:docker}
	I1122 00:31:57.857949  252747 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1122 00:31:57.857968  252747 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1122 00:31:57.858023  252747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983546
	I1122 00:31:57.868177  252747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/no-preload-983546/id_rsa Username:docker}
	I1122 00:31:57.892678  252747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/no-preload-983546/id_rsa Username:docker}
	I1122 00:31:58.001275  252747 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:31:58.014835  252747 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:31:58.029324  252747 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1122 00:31:58.029347  252747 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1122 00:31:58.077202  252747 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1122 00:31:58.077239  252747 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1122 00:31:58.084967  252747 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1122 00:31:58.124114  252747 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1122 00:31:58.124657  252747 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1122 00:31:58.182865  252747 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1122 00:31:58.182887  252747 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1122 00:31:58.204969  252747 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1122 00:31:58.204995  252747 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1122 00:31:58.239980  252747 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1122 00:31:58.240013  252747 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1122 00:31:58.266916  252747 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1122 00:31:58.266941  252747 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1122 00:31:58.288959  252747 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1122 00:31:58.288983  252747 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1122 00:31:58.313673  252747 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1122 00:31:58.313699  252747 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1122 00:31:58.336117  252747 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
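
After scp'ing each dashboard manifest into /etc/kubernetes/addons, the runner applies them all with a single `kubectl apply` carrying one -f flag per file, pinned to the node's kubeconfig via `sudo KUBECONFIG=...`. A sketch of building that invocation (the manifest list is copied from the log; the exec wrapper is illustrative):

package main

import (
	"fmt"
	"os/exec"
)

// applyAddonManifests builds the single `kubectl apply` the log shows:
// one -f flag per manifest, run under the node kubeconfig via sudo's
// leading VAR=value environment syntax.
func applyAddonManifests(kubectlPath string, manifests []string) error {
	args := []string{"KUBECONFIG=/var/lib/minikube/kubeconfig", kubectlPath, "apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	out, err := exec.Command("sudo", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply: %v: %s", err, out)
	}
	return nil
}

func main() {
	manifests := []string{
		"/etc/kubernetes/addons/dashboard-ns.yaml",
		"/etc/kubernetes/addons/dashboard-clusterrole.yaml",
		"/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml",
		"/etc/kubernetes/addons/dashboard-configmap.yaml",
		"/etc/kubernetes/addons/dashboard-dp.yaml",
		"/etc/kubernetes/addons/dashboard-role.yaml",
		"/etc/kubernetes/addons/dashboard-rolebinding.yaml",
		"/etc/kubernetes/addons/dashboard-sa.yaml",
		"/etc/kubernetes/addons/dashboard-secret.yaml",
		"/etc/kubernetes/addons/dashboard-svc.yaml",
	}
	if err := applyAddonManifests("/var/lib/minikube/binaries/v1.34.1/kubectl", manifests); err != nil {
		fmt.Println(err)
	}
}
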
	I1122 00:32:01.159349  252747 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.158028615s)
	I1122 00:32:01.159417  252747 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.144545295s)
	I1122 00:32:01.159486  252747 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.074497604s)
	I1122 00:32:01.159500  252747 node_ready.go:35] waiting up to 6m0s for node "no-preload-983546" to be "Ready" ...
	I1122 00:32:01.159583  252747 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.823428245s)
	I1122 00:32:01.161277  252747 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-983546 addons enable metrics-server
	
	I1122 00:32:01.167926  252747 node_ready.go:49] node "no-preload-983546" is "Ready"
	I1122 00:32:01.167949  252747 node_ready.go:38] duration metric: took 8.413326ms for node "no-preload-983546" to be "Ready" ...
	I1122 00:32:01.167962  252747 api_server.go:52] waiting for apiserver process to appear ...
	I1122 00:32:01.168005  252747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:32:01.172509  252747 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1122 00:31:58.335809  246023 pod_ready.go:104] pod "coredns-5dd5756b68-lwzsc" is not "Ready", error: <nil>
	W1122 00:32:00.826805  246023 pod_ready.go:104] pod "coredns-5dd5756b68-lwzsc" is not "Ready", error: <nil>
	I1122 00:32:03.740361  250396 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1122 00:32:03.740437  250396 kubeadm.go:319] [preflight] Running pre-flight checks
	I1122 00:32:03.740585  250396 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1122 00:32:03.740671  250396 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1122 00:32:03.740718  250396 kubeadm.go:319] OS: Linux
	I1122 00:32:03.740799  250396 kubeadm.go:319] CGROUPS_CPU: enabled
	I1122 00:32:03.740880  250396 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1122 00:32:03.740956  250396 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1122 00:32:03.741043  250396 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1122 00:32:03.741155  250396 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1122 00:32:03.741220  250396 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1122 00:32:03.741303  250396 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1122 00:32:03.741381  250396 kubeadm.go:319] CGROUPS_IO: enabled
	I1122 00:32:03.741480  250396 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1122 00:32:03.741631  250396 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1122 00:32:03.741771  250396 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1122 00:32:03.741860  250396 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1122 00:32:03.743928  250396 out.go:252]   - Generating certificates and keys ...
	I1122 00:32:03.743995  250396 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1122 00:32:03.744120  250396 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1122 00:32:03.744232  250396 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1122 00:32:03.744291  250396 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1122 00:32:03.744352  250396 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1122 00:32:03.744395  250396 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1122 00:32:03.744472  250396 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1122 00:32:03.744645  250396 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-084979 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1122 00:32:03.744704  250396 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1122 00:32:03.744808  250396 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-084979 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1122 00:32:03.744871  250396 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1122 00:32:03.744932  250396 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1122 00:32:03.744972  250396 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1122 00:32:03.745021  250396 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1122 00:32:03.745115  250396 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1122 00:32:03.745180  250396 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1122 00:32:03.745226  250396 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1122 00:32:03.745300  250396 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1122 00:32:03.745349  250396 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1122 00:32:03.745423  250396 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1122 00:32:03.745504  250396 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1122 00:32:03.746554  250396 out.go:252]   - Booting up control plane ...
	I1122 00:32:03.746645  250396 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1122 00:32:03.746736  250396 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1122 00:32:03.746794  250396 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1122 00:32:03.746895  250396 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1122 00:32:03.746980  250396 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1122 00:32:03.747098  250396 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1122 00:32:03.747186  250396 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1122 00:32:03.747220  250396 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1122 00:32:03.747342  250396 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1122 00:32:03.747434  250396 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1122 00:32:03.747488  250396 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.911364ms
	I1122 00:32:03.747570  250396 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1122 00:32:03.747646  250396 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1122 00:32:03.747724  250396 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1122 00:32:03.747825  250396 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1122 00:32:03.747922  250396 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.665422295s
	I1122 00:32:03.747995  250396 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.549547667s
	I1122 00:32:03.748062  250396 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.001391054s
	I1122 00:32:03.748174  250396 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1122 00:32:03.748301  250396 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1122 00:32:03.748362  250396 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1122 00:32:03.748554  250396 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-084979 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1122 00:32:03.748608  250396 kubeadm.go:319] [bootstrap-token] Using token: etvckh.upaww25zovv37fkt
	I1122 00:32:03.749809  250396 out.go:252]   - Configuring RBAC rules ...
	I1122 00:32:03.749920  250396 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1122 00:32:03.750019  250396 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1122 00:32:03.750181  250396 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1122 00:32:03.750395  250396 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1122 00:32:03.750532  250396 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1122 00:32:03.750663  250396 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1122 00:32:03.750828  250396 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1122 00:32:03.750890  250396 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1122 00:32:03.750959  250396 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1122 00:32:03.750968  250396 kubeadm.go:319] 
	I1122 00:32:03.751069  250396 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1122 00:32:03.751084  250396 kubeadm.go:319] 
	I1122 00:32:03.751199  250396 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1122 00:32:03.751211  250396 kubeadm.go:319] 
	I1122 00:32:03.751253  250396 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1122 00:32:03.751323  250396 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1122 00:32:03.751372  250396 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1122 00:32:03.751382  250396 kubeadm.go:319] 
	I1122 00:32:03.751448  250396 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1122 00:32:03.751455  250396 kubeadm.go:319] 
	I1122 00:32:03.751526  250396 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1122 00:32:03.751537  250396 kubeadm.go:319] 
	I1122 00:32:03.751617  250396 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1122 00:32:03.751731  250396 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1122 00:32:03.751814  250396 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1122 00:32:03.751823  250396 kubeadm.go:319] 
	I1122 00:32:03.751938  250396 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1122 00:32:03.752072  250396 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1122 00:32:03.752083  250396 kubeadm.go:319] 
	I1122 00:32:03.752202  250396 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token etvckh.upaww25zovv37fkt \
	I1122 00:32:03.752361  250396 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b7f0723026bed3ab68bd6fad96097e10a75fbae3d2b8c0df51e0d691a79889b0 \
	I1122 00:32:03.752397  250396 kubeadm.go:319] 	--control-plane 
	I1122 00:32:03.752409  250396 kubeadm.go:319] 
	I1122 00:32:03.752486  250396 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1122 00:32:03.752493  250396 kubeadm.go:319] 
	I1122 00:32:03.752566  250396 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token etvckh.upaww25zovv37fkt \
	I1122 00:32:03.752674  250396 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b7f0723026bed3ab68bd6fad96097e10a75fbae3d2b8c0df51e0d691a79889b0 
	I1122 00:32:03.752687  250396 cni.go:84] Creating CNI manager for ""
	I1122 00:32:03.752693  250396 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1122 00:32:03.753815  250396 out.go:179] * Configuring CNI (Container Networking Interface) ...
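
The cni.go lines above pick kindnet because the profile combines the docker driver with the crio runtime, so the node image ships no usable CNI by default. A deliberately simplified sketch of that decision; the rule here is inferred from this single log line, and the real cni.go handles many more driver/runtime/flag combinations:

package main

import "fmt"

// chooseCNI is a simplified stand-in for minikube's CNI selection: the log
// shows the "docker" driver plus "crio" runtime resolving to kindnet.
func chooseCNI(driver, runtime string) string {
	if driver == "docker" && runtime == "crio" {
		return "kindnet"
	}
	return "bridge" // placeholder default for this sketch only
}

func main() {
	fmt.Println(chooseCNI("docker", "crio")) // kindnet
}
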
	I1122 00:32:01.530155  218533 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1122 00:32:01.530629  218533 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1122 00:32:01.530691  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:32:01.530751  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:32:01.573529  218533 cri.go:89] found id: "28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5"
	I1122 00:32:01.573606  218533 cri.go:89] found id: ""
	I1122 00:32:01.573630  218533 logs.go:282] 1 containers: [28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5]
	I1122 00:32:01.573718  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:32:01.579772  218533 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1122 00:32:01.579884  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:32:01.628480  218533 cri.go:89] found id: ""
	I1122 00:32:01.628508  218533 logs.go:282] 0 containers: []
	W1122 00:32:01.628520  218533 logs.go:284] No container was found matching "etcd"
	I1122 00:32:01.628527  218533 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1122 00:32:01.628581  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:32:01.669551  218533 cri.go:89] found id: ""
	I1122 00:32:01.669590  218533 logs.go:282] 0 containers: []
	W1122 00:32:01.669602  218533 logs.go:284] No container was found matching "coredns"
	I1122 00:32:01.669610  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:32:01.669675  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:32:01.709664  218533 cri.go:89] found id: "f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:32:01.709730  218533 cri.go:89] found id: ""
	I1122 00:32:01.709744  218533 logs.go:282] 1 containers: [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33]
	I1122 00:32:01.709807  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:32:01.716273  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:32:01.716338  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:32:01.757836  218533 cri.go:89] found id: ""
	I1122 00:32:01.757865  218533 logs.go:282] 0 containers: []
	W1122 00:32:01.757877  218533 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:32:01.757889  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:32:01.757948  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:32:01.807272  218533 cri.go:89] found id: "93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c"
	I1122 00:32:01.807295  218533 cri.go:89] found id: ""
	I1122 00:32:01.807306  218533 logs.go:282] 1 containers: [93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c]
	I1122 00:32:01.807366  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:32:01.812630  218533 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1122 00:32:01.812696  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:32:01.850551  218533 cri.go:89] found id: ""
	I1122 00:32:01.850589  218533 logs.go:282] 0 containers: []
	W1122 00:32:01.850601  218533 logs.go:284] No container was found matching "kindnet"
	I1122 00:32:01.850609  218533 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:32:01.850667  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:32:01.888142  218533 cri.go:89] found id: ""
	I1122 00:32:01.888172  218533 logs.go:282] 0 containers: []
	W1122 00:32:01.888184  218533 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:32:01.888196  218533 logs.go:123] Gathering logs for kubelet ...
	I1122 00:32:01.888211  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:32:02.026356  218533 logs.go:123] Gathering logs for dmesg ...
	I1122 00:32:02.026396  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1122 00:32:02.046765  218533 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:32:02.046810  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1122 00:32:02.128377  218533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1122 00:32:02.128401  218533 logs.go:123] Gathering logs for kube-apiserver [28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5] ...
	I1122 00:32:02.128416  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5"
	I1122 00:32:02.170600  218533 logs.go:123] Gathering logs for kube-scheduler [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33] ...
	I1122 00:32:02.170631  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:32:02.242991  218533 logs.go:123] Gathering logs for kube-controller-manager [93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c] ...
	I1122 00:32:02.243019  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c"
	I1122 00:32:02.278136  218533 logs.go:123] Gathering logs for CRI-O ...
	I1122 00:32:02.278167  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1122 00:32:02.353550  218533 logs.go:123] Gathering logs for container status ...
	I1122 00:32:02.353590  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1122 00:32:01.173529  252747 addons.go:530] duration metric: took 3.40010682s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1122 00:32:01.181038  252747 api_server.go:72] duration metric: took 3.407850159s to wait for apiserver process to appear ...
	I1122 00:32:01.181069  252747 api_server.go:88] waiting for apiserver healthz status ...
	I1122 00:32:01.181088  252747 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1122 00:32:01.185781  252747 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1122 00:32:01.185823  252747 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1122 00:32:01.681192  252747 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1122 00:32:01.687851  252747 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1122 00:32:01.687879  252747 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1122 00:32:02.181208  252747 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1122 00:32:02.186915  252747 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1122 00:32:02.188326  252747 api_server.go:141] control plane version: v1.34.1
	I1122 00:32:02.188355  252747 api_server.go:131] duration metric: took 1.007277312s to wait for apiserver health ...
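The trace above is minikube's apiserver readiness poll: it GETs /healthz roughly every 500ms, logging the per-poststarthook breakdown on each 500 until the endpoint returns 200 "ok". The [-] lines mark poststarthooks (here rbac/bootstrap-roles and the priority-class bootstrap) that have not finished yet. A minimal Go sketch of the same polling pattern, assuming anonymous access to /healthz and skipping certificate verification (the real client authenticates properly):

// healthz_poll.go — a sketch of the readiness poll seen in the trace above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// Assumption for the sketch: skip cert verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // endpoint answered "ok"
			}
			// A 500 body lists each poststarthook; [-] means not done yet.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence above
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.76.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}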
	I1122 00:32:02.188367  252747 system_pods.go:43] waiting for kube-system pods to appear ...
	I1122 00:32:02.192329  252747 system_pods.go:59] 8 kube-system pods found
	I1122 00:32:02.192365  252747 system_pods.go:61] "coredns-66bc5c9577-4psr2" [92a4504e-35be-4d9d-86ae-a574cc38590b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:32:02.192378  252747 system_pods.go:61] "etcd-no-preload-983546" [0da66ff3-f7cb-447e-b079-8f17012f75ce] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1122 00:32:02.192385  252747 system_pods.go:61] "kindnet-rpr2g" [59f42291-1016-4584-9fdb-5df09910070b] Running
	I1122 00:32:02.192399  252747 system_pods.go:61] "kube-apiserver-no-preload-983546" [e14c6fe3-b764-4f17-8f05-302c8ea76d4a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1122 00:32:02.192408  252747 system_pods.go:61] "kube-controller-manager-no-preload-983546" [5d5e6efd-fb84-4468-8672-2a926e4faa74] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1122 00:32:02.192415  252747 system_pods.go:61] "kube-proxy-gnlfp" [0b842766-a9da-46e8-9259-f0cdca13c349] Running
	I1122 00:32:02.192425  252747 system_pods.go:61] "kube-scheduler-no-preload-983546" [7c10144e-6965-47c1-8047-1d6b81059de7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1122 00:32:02.192431  252747 system_pods.go:61] "storage-provisioner" [a6c69c5d-deb0-4c04-af56-6a7a594505ca] Running
	I1122 00:32:02.192441  252747 system_pods.go:74] duration metric: took 4.06574ms to wait for pod list to return data ...
	I1122 00:32:02.192449  252747 default_sa.go:34] waiting for default service account to be created ...
	I1122 00:32:02.195085  252747 default_sa.go:45] found service account: "default"
	I1122 00:32:02.195106  252747 default_sa.go:55] duration metric: took 2.651035ms for default service account to be created ...
	I1122 00:32:02.195116  252747 system_pods.go:116] waiting for k8s-apps to be running ...
	I1122 00:32:02.198180  252747 system_pods.go:86] 8 kube-system pods found
	I1122 00:32:02.198203  252747 system_pods.go:89] "coredns-66bc5c9577-4psr2" [92a4504e-35be-4d9d-86ae-a574cc38590b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:32:02.198210  252747 system_pods.go:89] "etcd-no-preload-983546" [0da66ff3-f7cb-447e-b079-8f17012f75ce] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1122 00:32:02.198216  252747 system_pods.go:89] "kindnet-rpr2g" [59f42291-1016-4584-9fdb-5df09910070b] Running
	I1122 00:32:02.198224  252747 system_pods.go:89] "kube-apiserver-no-preload-983546" [e14c6fe3-b764-4f17-8f05-302c8ea76d4a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1122 00:32:02.198231  252747 system_pods.go:89] "kube-controller-manager-no-preload-983546" [5d5e6efd-fb84-4468-8672-2a926e4faa74] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1122 00:32:02.198237  252747 system_pods.go:89] "kube-proxy-gnlfp" [0b842766-a9da-46e8-9259-f0cdca13c349] Running
	I1122 00:32:02.198245  252747 system_pods.go:89] "kube-scheduler-no-preload-983546" [7c10144e-6965-47c1-8047-1d6b81059de7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1122 00:32:02.198250  252747 system_pods.go:89] "storage-provisioner" [a6c69c5d-deb0-4c04-af56-6a7a594505ca] Running
	I1122 00:32:02.198267  252747 system_pods.go:126] duration metric: took 3.142921ms to wait for k8s-apps to be running ...
	I1122 00:32:02.198274  252747 system_svc.go:44] waiting for kubelet service to be running ....
	I1122 00:32:02.198324  252747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:32:02.215177  252747 system_svc.go:56] duration metric: took 16.894051ms WaitForService to wait for kubelet
	I1122 00:32:02.215202  252747 kubeadm.go:587] duration metric: took 4.442016177s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:32:02.215222  252747 node_conditions.go:102] verifying NodePressure condition ...
	I1122 00:32:02.218392  252747 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1122 00:32:02.218421  252747 node_conditions.go:123] node cpu capacity is 8
	I1122 00:32:02.218440  252747 node_conditions.go:105] duration metric: took 3.212244ms to run NodePressure ...
	I1122 00:32:02.218457  252747 start.go:242] waiting for startup goroutines ...
	I1122 00:32:02.218471  252747 start.go:247] waiting for cluster config update ...
	I1122 00:32:02.218486  252747 start.go:256] writing updated cluster config ...
	I1122 00:32:02.218798  252747 ssh_runner.go:195] Run: rm -f paused
	I1122 00:32:02.223404  252747 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:32:02.226904  252747 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4psr2" in "kube-system" namespace to be "Ready" or be gone ...
	W1122 00:32:04.232009  252747 pod_ready.go:104] pod "coredns-66bc5c9577-4psr2" is not "Ready", error: <nil>
	W1122 00:32:03.324593  246023 pod_ready.go:104] pod "coredns-5dd5756b68-lwzsc" is not "Ready", error: <nil>
	W1122 00:32:05.325173  246023 pod_ready.go:104] pod "coredns-5dd5756b68-lwzsc" is not "Ready", error: <nil>
	W1122 00:32:07.327505  246023 pod_ready.go:104] pod "coredns-5dd5756b68-lwzsc" is not "Ready", error: <nil>
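These interleaved pod_ready warnings come from minikube's extra wait: each profile polls its coredns pod until the Ready condition is True or the pod is gone. A rough equivalent using kubectl wait (pod name and the 4m timeout taken from the trace; note the real loop also treats a deleted pod as success, which kubectl wait does not):

// pod_wait.go — a sketch of the extra pod wait, via `kubectl wait`.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Blocks until the Ready condition holds or the timeout expires.
	cmd := exec.Command("kubectl", "wait", "--namespace", "kube-system",
		"--for=condition=Ready", "pod/coredns-66bc5c9577-4psr2",
		"--timeout=4m0s")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Println("pod did not become Ready:", err)
	}
}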
	I1122 00:32:03.754800  250396 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1122 00:32:03.758923  250396 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1122 00:32:03.758939  250396 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1122 00:32:03.772321  250396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
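Here the embed-certs profile copies a CNI manifest onto the node and applies it with the version-pinned kubectl. A sketch of that step (paths from the trace; the manifest body below is a placeholder, not the real 2601-byte CNI yaml that minikube scp's from memory):

// cni_apply.go — a sketch of applying a CNI manifest with the node's kubectl.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	manifest := "/var/tmp/minikube/cni.yaml"
	kubectl := "/var/lib/minikube/binaries/v1.34.1/kubectl"

	// Placeholder content; minikube writes the real manifest here over ssh.
	if err := os.WriteFile(manifest, []byte("# CNI manifest here"), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	// Apply it against the node-local kubeconfig, as in the trace.
	cmd := exec.Command("sudo", kubectl,
		"apply", "--kubeconfig=/var/lib/minikube/kubeconfig", "-f", manifest)
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Fprintln(os.Stderr, "apply failed:", err)
		os.Exit(1)
	}
}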
	I1122 00:32:03.995884  250396 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1122 00:32:03.995992  250396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-084979 minikube.k8s.io/updated_at=2025_11_22T00_32_03_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785 minikube.k8s.io/name=embed-certs-084979 minikube.k8s.io/primary=true
	I1122 00:32:03.995993  250396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:32:04.071731  250396 ops.go:34] apiserver oom_adj: -16
	I1122 00:32:04.071877  250396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:32:04.572611  250396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:32:05.072035  250396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:32:05.572343  250396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:32:06.071986  250396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:32:06.572988  250396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:32:07.072771  250396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:32:07.572366  250396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:32:08.072176  250396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:32:08.572186  250396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:32:08.658591  250396 kubeadm.go:1114] duration metric: took 4.662653726s to wait for elevateKubeSystemPrivileges
	I1122 00:32:08.658637  250396 kubeadm.go:403] duration metric: took 16.222848793s to StartCluster
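The burst of `kubectl get sa default` calls above is elevateKubeSystemPrivileges waiting for kubeadm to create the default service account before it can bind cluster-admin to kube-system. A sketch of that retry loop:

// wait_sa.go — a sketch of polling until the default service account exists.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // service account exists; RBAC can now be elevated
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence above
	}
	return fmt.Errorf("default service account not created within %s", timeout)
}

func main() {
	err := waitForDefaultSA("/var/lib/minikube/binaries/v1.34.1/kubectl",
		"/var/lib/minikube/kubeconfig", time.Minute)
	fmt.Println(err)
}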
	I1122 00:32:08.658668  250396 settings.go:142] acquiring lock: {Name:mk281bec5fc7c41e6f3fe8d6a1502b13a2db8fe5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:32:08.658754  250396 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21934-9122/kubeconfig
	I1122 00:32:08.661097  250396 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/kubeconfig: {Name:mk85f0b9d89ff824568ecbebf0d1111042a6bb93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:32:08.661390  250396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1122 00:32:08.661413  250396 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1122 00:32:08.661465  250396 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1122 00:32:08.661577  250396 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-084979"
	I1122 00:32:08.661605  250396 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-084979"
	I1122 00:32:08.661636  250396 host.go:66] Checking if "embed-certs-084979" exists ...
	I1122 00:32:08.661630  250396 addons.go:70] Setting default-storageclass=true in profile "embed-certs-084979"
	I1122 00:32:08.661655  250396 config.go:182] Loaded profile config "embed-certs-084979": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:32:08.661676  250396 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-084979"
	I1122 00:32:08.662134  250396 cli_runner.go:164] Run: docker container inspect embed-certs-084979 --format={{.State.Status}}
	I1122 00:32:08.662261  250396 cli_runner.go:164] Run: docker container inspect embed-certs-084979 --format={{.State.Status}}
	I1122 00:32:08.662773  250396 out.go:179] * Verifying Kubernetes components...
	I1122 00:32:08.665750  250396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:32:08.690424  250396 addons.go:239] Setting addon default-storageclass=true in "embed-certs-084979"
	I1122 00:32:08.690491  250396 host.go:66] Checking if "embed-certs-084979" exists ...
	I1122 00:32:08.690870  250396 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1122 00:32:08.691123  250396 cli_runner.go:164] Run: docker container inspect embed-certs-084979 --format={{.State.Status}}
	I1122 00:32:08.692185  250396 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:32:08.692207  250396 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1122 00:32:08.692258  250396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-084979
	I1122 00:32:08.720362  250396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/embed-certs-084979/id_rsa Username:docker}
	I1122 00:32:08.728927  250396 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1122 00:32:08.728956  250396 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1122 00:32:08.729017  250396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-084979
	I1122 00:32:08.756027  250396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/embed-certs-084979/id_rsa Username:docker}
	I1122 00:32:08.777256  250396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1122 00:32:08.844110  250396 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:32:08.845483  250396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:32:08.888146  250396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1122 00:32:08.991136  250396 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1122 00:32:08.994638  250396 node_ready.go:35] waiting up to 6m0s for node "embed-certs-084979" to be "Ready" ...
	I1122 00:32:09.263856  250396 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
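A few lines up, minikube rewrites the coredns ConfigMap in place, splicing a hosts{} block ahead of the forward plugin so host.minikube.internal (192.168.94.1 here) resolves in-cluster; the real command does it with sed between `kubectl get` and `kubectl replace -f -`. A sketch using a plain byte replace instead of sed:

// coredns_hosts.go — a sketch of the host-record injection into CoreDNS.
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.34.1/kubectl"
	get := exec.Command("sudo", kubectl, "--kubeconfig=/var/lib/minikube/kubeconfig",
		"-n", "kube-system", "get", "configmap", "coredns", "-o", "yaml")
	yaml, err := get.Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}

	// Splice the hosts block in front of the forward plugin line (a naive
	// byte replace is enough for a sketch; minikube uses sed).
	hosts := []byte("        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }\n        forward .")
	patched := bytes.Replace(yaml, []byte("        forward ."), hosts, 1)

	replace := exec.Command("sudo", kubectl, "--kubeconfig=/var/lib/minikube/kubeconfig",
		"replace", "-f", "-")
	replace.Stdin = bytes.NewReader(patched)
	out, err := replace.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}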
	I1122 00:32:04.897223  218533 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1122 00:32:04.898234  218533 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1122 00:32:04.898296  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:32:04.898366  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:32:04.937190  218533 cri.go:89] found id: "28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5"
	I1122 00:32:04.937217  218533 cri.go:89] found id: ""
	I1122 00:32:04.937228  218533 logs.go:282] 1 containers: [28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5]
	I1122 00:32:04.937289  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:32:04.942473  218533 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1122 00:32:04.942610  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:32:04.979203  218533 cri.go:89] found id: ""
	I1122 00:32:04.979231  218533 logs.go:282] 0 containers: []
	W1122 00:32:04.979242  218533 logs.go:284] No container was found matching "etcd"
	I1122 00:32:04.979250  218533 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1122 00:32:04.979312  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:32:05.016272  218533 cri.go:89] found id: ""
	I1122 00:32:05.016303  218533 logs.go:282] 0 containers: []
	W1122 00:32:05.016315  218533 logs.go:284] No container was found matching "coredns"
	I1122 00:32:05.016322  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:32:05.016381  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:32:05.052256  218533 cri.go:89] found id: "f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:32:05.052288  218533 cri.go:89] found id: ""
	I1122 00:32:05.052299  218533 logs.go:282] 1 containers: [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33]
	I1122 00:32:05.052357  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:32:05.057464  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:32:05.057546  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:32:05.092269  218533 cri.go:89] found id: ""
	I1122 00:32:05.092294  218533 logs.go:282] 0 containers: []
	W1122 00:32:05.092304  218533 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:32:05.092312  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:32:05.092378  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:32:05.129968  218533 cri.go:89] found id: "93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c"
	I1122 00:32:05.129992  218533 cri.go:89] found id: ""
	I1122 00:32:05.130003  218533 logs.go:282] 1 containers: [93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c]
	I1122 00:32:05.130087  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:32:05.135490  218533 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1122 00:32:05.135553  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:32:05.170412  218533 cri.go:89] found id: ""
	I1122 00:32:05.170439  218533 logs.go:282] 0 containers: []
	W1122 00:32:05.170450  218533 logs.go:284] No container was found matching "kindnet"
	I1122 00:32:05.170458  218533 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:32:05.170518  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:32:05.209012  218533 cri.go:89] found id: ""
	I1122 00:32:05.209040  218533 logs.go:282] 0 containers: []
	W1122 00:32:05.209075  218533 logs.go:284] No container was found matching "storage-provisioner"
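This discovery pass asks crictl for container IDs component by component; an empty result produces the "No container was found matching" warnings above. A sketch of the same probe:

// find_containers.go — a sketch of per-component container discovery via crictl.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func findContainers(name string) ([]string, error) {
	// --quiet prints only container IDs, one per line.
	out, err := exec.Command("sudo", "crictl", "ps", "-a",
		"--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager"} {
		ids, err := findContainers(name)
		if err != nil {
			fmt.Println(name, "error:", err)
			continue
		}
		fmt.Printf("%s: %d containers %v\n", name, len(ids), ids)
	}
}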
	I1122 00:32:05.209089  218533 logs.go:123] Gathering logs for kube-apiserver [28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5] ...
	I1122 00:32:05.209104  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5"
	I1122 00:32:05.252894  218533 logs.go:123] Gathering logs for kube-scheduler [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33] ...
	I1122 00:32:05.252929  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:32:05.321965  218533 logs.go:123] Gathering logs for kube-controller-manager [93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c] ...
	I1122 00:32:05.322005  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c"
	I1122 00:32:05.356581  218533 logs.go:123] Gathering logs for CRI-O ...
	I1122 00:32:05.356616  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1122 00:32:05.443942  218533 logs.go:123] Gathering logs for container status ...
	I1122 00:32:05.443983  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1122 00:32:05.482953  218533 logs.go:123] Gathering logs for kubelet ...
	I1122 00:32:05.482985  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:32:05.621438  218533 logs.go:123] Gathering logs for dmesg ...
	I1122 00:32:05.621484  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1122 00:32:05.640769  218533 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:32:05.640808  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1122 00:32:05.714531  218533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
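With the apiserver refusing connections, describe-nodes fails and minikube falls back to gathering logs straight from the node: crictl for container logs, journalctl for the kubelet and CRI-O units. A sketch of that gathering step (the container ID is the kube-apiserver ID found above):

// gather_logs.go — a sketch of the node-local log gathering in the trace.
package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) {
	out, err := exec.Command("sudo", args...).CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("command failed:", err)
	}
}

func main() {
	// kube-apiserver container ID taken from the discovery pass above.
	id := "28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5"
	run("/usr/local/bin/crictl", "logs", "--tail", "400", id)
	run("journalctl", "-u", "kubelet", "-n", "400")
	run("journalctl", "-u", "crio", "-n", "400")
}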
	I1122 00:32:08.215942  218533 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1122 00:32:08.216453  218533 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1122 00:32:08.216518  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:32:08.216584  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:32:08.252920  218533 cri.go:89] found id: "28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5"
	I1122 00:32:08.252957  218533 cri.go:89] found id: ""
	I1122 00:32:08.252969  218533 logs.go:282] 1 containers: [28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5]
	I1122 00:32:08.253034  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:32:08.258264  218533 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1122 00:32:08.258331  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:32:08.292110  218533 cri.go:89] found id: ""
	I1122 00:32:08.292133  218533 logs.go:282] 0 containers: []
	W1122 00:32:08.292146  218533 logs.go:284] No container was found matching "etcd"
	I1122 00:32:08.292154  218533 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1122 00:32:08.292213  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:32:08.325112  218533 cri.go:89] found id: ""
	I1122 00:32:08.325138  218533 logs.go:282] 0 containers: []
	W1122 00:32:08.325149  218533 logs.go:284] No container was found matching "coredns"
	I1122 00:32:08.325157  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:32:08.325214  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:32:08.357137  218533 cri.go:89] found id: "f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:32:08.357162  218533 cri.go:89] found id: ""
	I1122 00:32:08.357174  218533 logs.go:282] 1 containers: [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33]
	I1122 00:32:08.357230  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:32:08.361362  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:32:08.361418  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:32:08.395728  218533 cri.go:89] found id: ""
	I1122 00:32:08.395759  218533 logs.go:282] 0 containers: []
	W1122 00:32:08.395770  218533 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:32:08.395778  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:32:08.395840  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:32:08.427682  218533 cri.go:89] found id: "93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c"
	I1122 00:32:08.427707  218533 cri.go:89] found id: ""
	I1122 00:32:08.427718  218533 logs.go:282] 1 containers: [93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c]
	I1122 00:32:08.427777  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:32:08.432425  218533 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1122 00:32:08.432487  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:32:08.463456  218533 cri.go:89] found id: ""
	I1122 00:32:08.463484  218533 logs.go:282] 0 containers: []
	W1122 00:32:08.463494  218533 logs.go:284] No container was found matching "kindnet"
	I1122 00:32:08.463503  218533 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:32:08.463565  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:32:08.498532  218533 cri.go:89] found id: ""
	I1122 00:32:08.498561  218533 logs.go:282] 0 containers: []
	W1122 00:32:08.498578  218533 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:32:08.498591  218533 logs.go:123] Gathering logs for kube-apiserver [28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5] ...
	I1122 00:32:08.498611  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5"
	I1122 00:32:08.538389  218533 logs.go:123] Gathering logs for kube-scheduler [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33] ...
	I1122 00:32:08.538421  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:32:08.611006  218533 logs.go:123] Gathering logs for kube-controller-manager [93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c] ...
	I1122 00:32:08.611060  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c"
	I1122 00:32:08.643475  218533 logs.go:123] Gathering logs for CRI-O ...
	I1122 00:32:08.643510  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1122 00:32:08.747150  218533 logs.go:123] Gathering logs for container status ...
	I1122 00:32:08.747238  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1122 00:32:08.795942  218533 logs.go:123] Gathering logs for kubelet ...
	I1122 00:32:08.795979  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:32:08.931307  218533 logs.go:123] Gathering logs for dmesg ...
	I1122 00:32:08.931347  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1122 00:32:08.946935  218533 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:32:08.946965  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1122 00:32:09.033697  218533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1122 00:32:06.232766  252747 pod_ready.go:104] pod "coredns-66bc5c9577-4psr2" is not "Ready", error: <nil>
	W1122 00:32:08.233839  252747 pod_ready.go:104] pod "coredns-66bc5c9577-4psr2" is not "Ready", error: <nil>
	W1122 00:32:09.825147  246023 pod_ready.go:104] pod "coredns-5dd5756b68-lwzsc" is not "Ready", error: <nil>
	W1122 00:32:12.322914  246023 pod_ready.go:104] pod "coredns-5dd5756b68-lwzsc" is not "Ready", error: <nil>
	I1122 00:32:09.264969  250396 addons.go:530] duration metric: took 603.501615ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1122 00:32:09.497069  250396 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-084979" context rescaled to 1 replicas
	W1122 00:32:10.997553  250396 node_ready.go:57] node "embed-certs-084979" has "Ready":"False" status (will retry)
	I1122 00:32:11.535789  218533 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1122 00:32:11.536195  218533 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1122 00:32:11.536260  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:32:11.536317  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:32:11.564019  218533 cri.go:89] found id: "28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5"
	I1122 00:32:11.564042  218533 cri.go:89] found id: ""
	I1122 00:32:11.564085  218533 logs.go:282] 1 containers: [28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5]
	I1122 00:32:11.564144  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:32:11.567867  218533 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1122 00:32:11.567933  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:32:11.592888  218533 cri.go:89] found id: ""
	I1122 00:32:11.592910  218533 logs.go:282] 0 containers: []
	W1122 00:32:11.592919  218533 logs.go:284] No container was found matching "etcd"
	I1122 00:32:11.592926  218533 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1122 00:32:11.592977  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:32:11.615551  218533 cri.go:89] found id: ""
	I1122 00:32:11.615573  218533 logs.go:282] 0 containers: []
	W1122 00:32:11.615583  218533 logs.go:284] No container was found matching "coredns"
	I1122 00:32:11.615590  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:32:11.615646  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:32:11.640041  218533 cri.go:89] found id: "f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:32:11.640075  218533 cri.go:89] found id: ""
	I1122 00:32:11.640084  218533 logs.go:282] 1 containers: [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33]
	I1122 00:32:11.640127  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:32:11.643842  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:32:11.643888  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:32:11.667737  218533 cri.go:89] found id: ""
	I1122 00:32:11.667760  218533 logs.go:282] 0 containers: []
	W1122 00:32:11.667769  218533 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:32:11.667777  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:32:11.667829  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:32:11.692206  218533 cri.go:89] found id: "93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c"
	I1122 00:32:11.692227  218533 cri.go:89] found id: ""
	I1122 00:32:11.692236  218533 logs.go:282] 1 containers: [93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c]
	I1122 00:32:11.692288  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:32:11.695688  218533 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1122 00:32:11.695734  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:32:11.719309  218533 cri.go:89] found id: ""
	I1122 00:32:11.719330  218533 logs.go:282] 0 containers: []
	W1122 00:32:11.719336  218533 logs.go:284] No container was found matching "kindnet"
	I1122 00:32:11.719341  218533 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:32:11.719382  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:32:11.743535  218533 cri.go:89] found id: ""
	I1122 00:32:11.743558  218533 logs.go:282] 0 containers: []
	W1122 00:32:11.743567  218533 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:32:11.743577  218533 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:32:11.743590  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1122 00:32:11.798421  218533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1122 00:32:11.798443  218533 logs.go:123] Gathering logs for kube-apiserver [28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5] ...
	I1122 00:32:11.798458  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5"
	I1122 00:32:11.833336  218533 logs.go:123] Gathering logs for kube-scheduler [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33] ...
	I1122 00:32:11.833363  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:32:11.883020  218533 logs.go:123] Gathering logs for kube-controller-manager [93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c] ...
	I1122 00:32:11.883047  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c"
	I1122 00:32:11.906415  218533 logs.go:123] Gathering logs for CRI-O ...
	I1122 00:32:11.906436  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1122 00:32:11.961581  218533 logs.go:123] Gathering logs for container status ...
	I1122 00:32:11.961605  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1122 00:32:11.990349  218533 logs.go:123] Gathering logs for kubelet ...
	I1122 00:32:11.990371  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:32:12.073562  218533 logs.go:123] Gathering logs for dmesg ...
	I1122 00:32:12.073590  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1122 00:32:10.731437  252747 pod_ready.go:104] pod "coredns-66bc5c9577-4psr2" is not "Ready", error: <nil>
	W1122 00:32:12.732122  252747 pod_ready.go:104] pod "coredns-66bc5c9577-4psr2" is not "Ready", error: <nil>
	W1122 00:32:14.732512  252747 pod_ready.go:104] pod "coredns-66bc5c9577-4psr2" is not "Ready", error: <nil>
	W1122 00:32:14.322986  246023 pod_ready.go:104] pod "coredns-5dd5756b68-lwzsc" is not "Ready", error: <nil>
	I1122 00:32:14.824010  246023 pod_ready.go:94] pod "coredns-5dd5756b68-lwzsc" is "Ready"
	I1122 00:32:14.824037  246023 pod_ready.go:86] duration metric: took 31.505715835s for pod "coredns-5dd5756b68-lwzsc" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:32:14.826639  246023 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-377321" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:32:14.830991  246023 pod_ready.go:94] pod "etcd-old-k8s-version-377321" is "Ready"
	I1122 00:32:14.831015  246023 pod_ready.go:86] duration metric: took 4.355984ms for pod "etcd-old-k8s-version-377321" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:32:14.833840  246023 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-377321" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:32:14.837703  246023 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-377321" is "Ready"
	I1122 00:32:14.837724  246023 pod_ready.go:86] duration metric: took 3.863315ms for pod "kube-apiserver-old-k8s-version-377321" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:32:14.840603  246023 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-377321" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:32:15.022440  246023 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-377321" is "Ready"
	I1122 00:32:15.022464  246023 pod_ready.go:86] duration metric: took 181.838073ms for pod "kube-controller-manager-old-k8s-version-377321" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:32:15.222995  246023 pod_ready.go:83] waiting for pod "kube-proxy-pz8cc" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:32:15.622529  246023 pod_ready.go:94] pod "kube-proxy-pz8cc" is "Ready"
	I1122 00:32:15.622552  246023 pod_ready.go:86] duration metric: took 399.533017ms for pod "kube-proxy-pz8cc" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:32:15.822978  246023 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-377321" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:32:16.222462  246023 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-377321" is "Ready"
	I1122 00:32:16.222487  246023 pod_ready.go:86] duration metric: took 399.487283ms for pod "kube-scheduler-old-k8s-version-377321" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:32:16.222504  246023 pod_ready.go:40] duration metric: took 32.908075029s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:32:16.265046  246023 start.go:628] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1122 00:32:16.266720  246023 out.go:203] 
	W1122 00:32:16.267945  246023 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1122 00:32:16.269101  246023 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1122 00:32:16.270254  246023 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-377321" cluster and "default" namespace by default
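The old-k8s-version profile finishes with a version-skew warning: host kubectl 1.34.2 against cluster 1.28.0 is a minor skew of 6, well outside the supported +/-1 window. A sketch of the check (a hypothetical reimplementation for illustration, not minikube's actual code):

// skew_check.go — a sketch of the kubectl/cluster minor-version skew warning.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

func minor(v string) int {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	if len(parts) < 2 {
		return -1
	}
	m, _ := strconv.Atoi(parts[1])
	return m
}

func main() {
	kubectlVersion, clusterVersion := "1.34.2", "1.28.0" // values from the trace
	skew := minor(kubectlVersion) - minor(clusterVersion)
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n",
		kubectlVersion, clusterVersion, skew)
	if skew > 1 {
		fmt.Printf("! kubectl is version %s, which may have incompatibilities with Kubernetes %s.\n",
			kubectlVersion, clusterVersion)
	}
}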
	W1122 00:32:13.497333  250396 node_ready.go:57] node "embed-certs-084979" has "Ready":"False" status (will retry)
	W1122 00:32:15.497785  250396 node_ready.go:57] node "embed-certs-084979" has "Ready":"False" status (will retry)
	W1122 00:32:17.997471  250396 node_ready.go:57] node "embed-certs-084979" has "Ready":"False" status (will retry)
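Meanwhile embed-certs is still polling its node's Ready condition, retrying while the status is False. A sketch of that poll via kubectl's jsonpath output (node name and the 6m budget taken from the trace):

// node_ready.go — a sketch of the node-readiness poll in the trace above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func nodeReady(node string) (bool, error) {
	// Read just the Ready condition's status ("True"/"False"/"Unknown").
	out, err := exec.Command("kubectl", "get", "node", node, "-o",
		`jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	deadline := time.Now().Add(6 * time.Minute) // the trace waits up to 6m0s
	for time.Now().Before(deadline) {
		ready, err := nodeReady("embed-certs-084979")
		if err == nil && ready {
			fmt.Println("node is Ready")
			return
		}
		fmt.Println(`node has "Ready":"False" status (will retry)`)
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for node")
}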
	I1122 00:32:14.587256  218533 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1122 00:32:14.587644  218533 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1122 00:32:14.587699  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:32:14.587755  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:32:14.614678  218533 cri.go:89] found id: "28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5"
	I1122 00:32:14.614701  218533 cri.go:89] found id: ""
	I1122 00:32:14.614711  218533 logs.go:282] 1 containers: [28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5]
	I1122 00:32:14.614768  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:32:14.618481  218533 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1122 00:32:14.618536  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:32:14.643735  218533 cri.go:89] found id: ""
	I1122 00:32:14.643757  218533 logs.go:282] 0 containers: []
	W1122 00:32:14.643766  218533 logs.go:284] No container was found matching "etcd"
	I1122 00:32:14.643773  218533 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1122 00:32:14.643822  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:32:14.669121  218533 cri.go:89] found id: ""
	I1122 00:32:14.669145  218533 logs.go:282] 0 containers: []
	W1122 00:32:14.669155  218533 logs.go:284] No container was found matching "coredns"
	I1122 00:32:14.669162  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:32:14.669221  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:32:14.694038  218533 cri.go:89] found id: "f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:32:14.694085  218533 cri.go:89] found id: ""
	I1122 00:32:14.694095  218533 logs.go:282] 1 containers: [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33]
	I1122 00:32:14.694153  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:32:14.697687  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:32:14.697733  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:32:14.722140  218533 cri.go:89] found id: ""
	I1122 00:32:14.722159  218533 logs.go:282] 0 containers: []
	W1122 00:32:14.722166  218533 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:32:14.722171  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:32:14.722219  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:32:14.750643  218533 cri.go:89] found id: "93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c"
	I1122 00:32:14.750662  218533 cri.go:89] found id: ""
	I1122 00:32:14.750670  218533 logs.go:282] 1 containers: [93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c]
	I1122 00:32:14.750718  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:32:14.754450  218533 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1122 00:32:14.754501  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:32:14.780094  218533 cri.go:89] found id: ""
	I1122 00:32:14.780118  218533 logs.go:282] 0 containers: []
	W1122 00:32:14.780127  218533 logs.go:284] No container was found matching "kindnet"
	I1122 00:32:14.780135  218533 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:32:14.780191  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:32:14.806138  218533 cri.go:89] found id: ""
	I1122 00:32:14.806162  218533 logs.go:282] 0 containers: []
	W1122 00:32:14.806174  218533 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:32:14.806187  218533 logs.go:123] Gathering logs for dmesg ...
	I1122 00:32:14.806203  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1122 00:32:14.819748  218533 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:32:14.819774  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1122 00:32:14.876798  218533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1122 00:32:14.876833  218533 logs.go:123] Gathering logs for kube-apiserver [28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5] ...
	I1122 00:32:14.876852  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5"
	I1122 00:32:14.909027  218533 logs.go:123] Gathering logs for kube-scheduler [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33] ...
	I1122 00:32:14.909062  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:32:14.960970  218533 logs.go:123] Gathering logs for kube-controller-manager [93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c] ...
	I1122 00:32:14.960994  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c"
	I1122 00:32:14.986818  218533 logs.go:123] Gathering logs for CRI-O ...
	I1122 00:32:14.986846  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1122 00:32:15.043330  218533 logs.go:123] Gathering logs for container status ...
	I1122 00:32:15.043354  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1122 00:32:15.071710  218533 logs.go:123] Gathering logs for kubelet ...
	I1122 00:32:15.071762  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:32:17.659860  218533 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1122 00:32:17.660228  218533 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1122 00:32:17.660291  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:32:17.660342  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:32:17.687605  218533 cri.go:89] found id: "28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5"
	I1122 00:32:17.687626  218533 cri.go:89] found id: ""
	I1122 00:32:17.687634  218533 logs.go:282] 1 containers: [28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5]
	I1122 00:32:17.687679  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:32:17.691281  218533 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1122 00:32:17.691334  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:32:17.715534  218533 cri.go:89] found id: ""
	I1122 00:32:17.715555  218533 logs.go:282] 0 containers: []
	W1122 00:32:17.715560  218533 logs.go:284] No container was found matching "etcd"
	I1122 00:32:17.715565  218533 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1122 00:32:17.715604  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:32:17.740688  218533 cri.go:89] found id: ""
	I1122 00:32:17.740708  218533 logs.go:282] 0 containers: []
	W1122 00:32:17.740717  218533 logs.go:284] No container was found matching "coredns"
	I1122 00:32:17.740724  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:32:17.740771  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:32:17.765719  218533 cri.go:89] found id: "f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:32:17.765743  218533 cri.go:89] found id: ""
	I1122 00:32:17.765753  218533 logs.go:282] 1 containers: [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33]
	I1122 00:32:17.765799  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:32:17.769489  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:32:17.769548  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:32:17.793906  218533 cri.go:89] found id: ""
	I1122 00:32:17.793929  218533 logs.go:282] 0 containers: []
	W1122 00:32:17.793937  218533 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:32:17.793944  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:32:17.794008  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:32:17.818834  218533 cri.go:89] found id: "93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c"
	I1122 00:32:17.818854  218533 cri.go:89] found id: ""
	I1122 00:32:17.818863  218533 logs.go:282] 1 containers: [93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c]
	I1122 00:32:17.818917  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:32:17.822475  218533 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1122 00:32:17.822530  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:32:17.847077  218533 cri.go:89] found id: ""
	I1122 00:32:17.847103  218533 logs.go:282] 0 containers: []
	W1122 00:32:17.847113  218533 logs.go:284] No container was found matching "kindnet"
	I1122 00:32:17.847137  218533 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:32:17.847186  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:32:17.873154  218533 cri.go:89] found id: ""
	I1122 00:32:17.873188  218533 logs.go:282] 0 containers: []
	W1122 00:32:17.873199  218533 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:32:17.873210  218533 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:32:17.873222  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1122 00:32:17.928354  218533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1122 00:32:17.928378  218533 logs.go:123] Gathering logs for kube-apiserver [28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5] ...
	I1122 00:32:17.928394  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5"
	I1122 00:32:17.961215  218533 logs.go:123] Gathering logs for kube-scheduler [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33] ...
	I1122 00:32:17.961243  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:32:18.014092  218533 logs.go:123] Gathering logs for kube-controller-manager [93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c] ...
	I1122 00:32:18.014127  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c"
	I1122 00:32:18.040632  218533 logs.go:123] Gathering logs for CRI-O ...
	I1122 00:32:18.040657  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1122 00:32:18.098144  218533 logs.go:123] Gathering logs for container status ...
	I1122 00:32:18.098173  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1122 00:32:18.127668  218533 logs.go:123] Gathering logs for kubelet ...
	I1122 00:32:18.127699  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:32:18.212045  218533 logs.go:123] Gathering logs for dmesg ...
	I1122 00:32:18.212085  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
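The cycle above is minikube's degraded-apiserver log collection: healthz is probed, each control-plane component is enumerated through crictl, and the last 400 lines of every log source are pulled. The same evidence can be gathered by hand on the node using only the commands the runner itself invokes (a sketch; <CONTAINER_ID> is a placeholder for an ID returned by the first command, not a value from this run):
	sudo crictl ps -a --quiet --name=kube-apiserver              # one matching container ID per line
	sudo /usr/local/bin/crictl logs --tail 400 <CONTAINER_ID>    # last 400 lines of that container
	sudo journalctl -u crio -n 400                               # CRI-O runtime log
	sudo journalctl -u kubelet -n 400                            # kubelet log
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400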
	W1122 00:32:17.231619  252747 pod_ready.go:104] pod "coredns-66bc5c9577-4psr2" is not "Ready", error: <nil>
	W1122 00:32:19.731809  252747 pod_ready.go:104] pod "coredns-66bc5c9577-4psr2" is not "Ready", error: <nil>
	W1122 00:32:20.498174  250396 node_ready.go:57] node "embed-certs-084979" has "Ready":"False" status (will retry)
	W1122 00:32:22.997834  250396 node_ready.go:57] node "embed-certs-084979" has "Ready":"False" status (will retry)
	I1122 00:32:20.725824  218533 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1122 00:32:20.726222  218533 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1122 00:32:20.726275  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:32:20.726331  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:32:20.754921  218533 cri.go:89] found id: "28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5"
	I1122 00:32:20.754940  218533 cri.go:89] found id: ""
	I1122 00:32:20.754949  218533 logs.go:282] 1 containers: [28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5]
	I1122 00:32:20.754995  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:32:20.758832  218533 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1122 00:32:20.758879  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:32:20.783771  218533 cri.go:89] found id: ""
	I1122 00:32:20.783790  218533 logs.go:282] 0 containers: []
	W1122 00:32:20.783797  218533 logs.go:284] No container was found matching "etcd"
	I1122 00:32:20.783803  218533 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1122 00:32:20.783856  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:32:20.810449  218533 cri.go:89] found id: ""
	I1122 00:32:20.810472  218533 logs.go:282] 0 containers: []
	W1122 00:32:20.810480  218533 logs.go:284] No container was found matching "coredns"
	I1122 00:32:20.810486  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:32:20.810543  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:32:20.837159  218533 cri.go:89] found id: "f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:32:20.837181  218533 cri.go:89] found id: ""
	I1122 00:32:20.837190  218533 logs.go:282] 1 containers: [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33]
	I1122 00:32:20.837238  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:32:20.840845  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:32:20.840905  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:32:20.865439  218533 cri.go:89] found id: ""
	I1122 00:32:20.865467  218533 logs.go:282] 0 containers: []
	W1122 00:32:20.865475  218533 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:32:20.865481  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:32:20.865541  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:32:20.891345  218533 cri.go:89] found id: "93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c"
	I1122 00:32:20.891369  218533 cri.go:89] found id: ""
	I1122 00:32:20.891377  218533 logs.go:282] 1 containers: [93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c]
	I1122 00:32:20.891418  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:32:20.895001  218533 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1122 00:32:20.895104  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:32:20.921028  218533 cri.go:89] found id: ""
	I1122 00:32:20.921066  218533 logs.go:282] 0 containers: []
	W1122 00:32:20.921076  218533 logs.go:284] No container was found matching "kindnet"
	I1122 00:32:20.921084  218533 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:32:20.921137  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:32:20.947527  218533 cri.go:89] found id: ""
	I1122 00:32:20.947552  218533 logs.go:282] 0 containers: []
	W1122 00:32:20.947562  218533 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:32:20.947579  218533 logs.go:123] Gathering logs for kubelet ...
	I1122 00:32:20.947593  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:32:21.043118  218533 logs.go:123] Gathering logs for dmesg ...
	I1122 00:32:21.043149  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1122 00:32:21.058034  218533 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:32:21.058111  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1122 00:32:21.116544  218533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1122 00:32:21.116566  218533 logs.go:123] Gathering logs for kube-apiserver [28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5] ...
	I1122 00:32:21.116578  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5"
	I1122 00:32:21.147804  218533 logs.go:123] Gathering logs for kube-scheduler [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33] ...
	I1122 00:32:21.147832  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:32:21.199577  218533 logs.go:123] Gathering logs for kube-controller-manager [93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c] ...
	I1122 00:32:21.199605  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c"
	I1122 00:32:21.225224  218533 logs.go:123] Gathering logs for CRI-O ...
	I1122 00:32:21.225255  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1122 00:32:21.281329  218533 logs.go:123] Gathering logs for container status ...
	I1122 00:32:21.281354  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1122 00:32:23.810365  218533 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1122 00:32:23.810717  218533 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1122 00:32:23.810772  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:32:23.810818  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:32:23.837384  218533 cri.go:89] found id: "28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5"
	I1122 00:32:23.837407  218533 cri.go:89] found id: ""
	I1122 00:32:23.837417  218533 logs.go:282] 1 containers: [28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5]
	I1122 00:32:23.837466  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:32:23.841228  218533 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1122 00:32:23.841300  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:32:23.865463  218533 cri.go:89] found id: ""
	I1122 00:32:23.865483  218533 logs.go:282] 0 containers: []
	W1122 00:32:23.865490  218533 logs.go:284] No container was found matching "etcd"
	I1122 00:32:23.865496  218533 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1122 00:32:23.865538  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:32:23.891829  218533 cri.go:89] found id: ""
	I1122 00:32:23.891849  218533 logs.go:282] 0 containers: []
	W1122 00:32:23.891856  218533 logs.go:284] No container was found matching "coredns"
	I1122 00:32:23.891865  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:32:23.891924  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:32:23.917195  218533 cri.go:89] found id: "f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:32:23.917220  218533 cri.go:89] found id: ""
	I1122 00:32:23.917231  218533 logs.go:282] 1 containers: [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33]
	I1122 00:32:23.917275  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:32:23.920785  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:32:23.920844  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:32:23.944914  218533 cri.go:89] found id: ""
	I1122 00:32:23.944936  218533 logs.go:282] 0 containers: []
	W1122 00:32:23.944945  218533 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:32:23.944951  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:32:23.944993  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:32:23.972047  218533 cri.go:89] found id: "93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c"
	I1122 00:32:23.972093  218533 cri.go:89] found id: ""
	I1122 00:32:23.972101  218533 logs.go:282] 1 containers: [93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c]
	I1122 00:32:23.972143  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:32:23.975663  218533 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1122 00:32:23.975714  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:32:24.000811  218533 cri.go:89] found id: ""
	I1122 00:32:24.000830  218533 logs.go:282] 0 containers: []
	W1122 00:32:24.000837  218533 logs.go:284] No container was found matching "kindnet"
	I1122 00:32:24.000843  218533 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:32:24.000888  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:32:24.025467  218533 cri.go:89] found id: ""
	I1122 00:32:24.025484  218533 logs.go:282] 0 containers: []
	W1122 00:32:24.025491  218533 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:32:24.025499  218533 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:32:24.025510  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1122 00:32:24.077907  218533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1122 00:32:24.077926  218533 logs.go:123] Gathering logs for kube-apiserver [28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5] ...
	I1122 00:32:24.077938  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5"
	I1122 00:32:24.109386  218533 logs.go:123] Gathering logs for kube-scheduler [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33] ...
	I1122 00:32:24.109411  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:32:24.157948  218533 logs.go:123] Gathering logs for kube-controller-manager [93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c] ...
	I1122 00:32:24.157980  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c"
	I1122 00:32:24.183206  218533 logs.go:123] Gathering logs for CRI-O ...
	I1122 00:32:24.183234  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1122 00:32:24.236823  218533 logs.go:123] Gathering logs for container status ...
	I1122 00:32:24.236845  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1122 00:32:24.265620  218533 logs.go:123] Gathering logs for kubelet ...
	I1122 00:32:24.265641  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:32:24.355847  218533 logs.go:123] Gathering logs for dmesg ...
	I1122 00:32:24.355869  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1122 00:32:22.231718  252747 pod_ready.go:104] pod "coredns-66bc5c9577-4psr2" is not "Ready", error: <nil>
	W1122 00:32:24.231821  252747 pod_ready.go:104] pod "coredns-66bc5c9577-4psr2" is not "Ready", error: <nil>
	W1122 00:32:25.497582  250396 node_ready.go:57] node "embed-certs-084979" has "Ready":"False" status (will retry)
	W1122 00:32:27.997797  250396 node_ready.go:57] node "embed-certs-084979" has "Ready":"False" status (will retry)
	I1122 00:32:26.870191  218533 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1122 00:32:26.870569  218533 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1122 00:32:26.870618  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:32:26.870668  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:32:26.897294  218533 cri.go:89] found id: "28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5"
	I1122 00:32:26.897316  218533 cri.go:89] found id: ""
	I1122 00:32:26.897332  218533 logs.go:282] 1 containers: [28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5]
	I1122 00:32:26.897379  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:32:26.901169  218533 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1122 00:32:26.901224  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:32:26.925844  218533 cri.go:89] found id: ""
	I1122 00:32:26.925867  218533 logs.go:282] 0 containers: []
	W1122 00:32:26.925877  218533 logs.go:284] No container was found matching "etcd"
	I1122 00:32:26.925885  218533 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1122 00:32:26.925940  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:32:26.950625  218533 cri.go:89] found id: ""
	I1122 00:32:26.950650  218533 logs.go:282] 0 containers: []
	W1122 00:32:26.950660  218533 logs.go:284] No container was found matching "coredns"
	I1122 00:32:26.950668  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:32:26.950712  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:32:26.976232  218533 cri.go:89] found id: "f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:32:26.976252  218533 cri.go:89] found id: ""
	I1122 00:32:26.976261  218533 logs.go:282] 1 containers: [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33]
	I1122 00:32:26.976309  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:32:26.980027  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:32:26.980097  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:32:27.006262  218533 cri.go:89] found id: ""
	I1122 00:32:27.006287  218533 logs.go:282] 0 containers: []
	W1122 00:32:27.006297  218533 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:32:27.006305  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:32:27.006355  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:32:27.031280  218533 cri.go:89] found id: "93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c"
	I1122 00:32:27.031301  218533 cri.go:89] found id: ""
	I1122 00:32:27.031308  218533 logs.go:282] 1 containers: [93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c]
	I1122 00:32:27.031356  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:32:27.034880  218533 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1122 00:32:27.034936  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:32:27.059733  218533 cri.go:89] found id: ""
	I1122 00:32:27.059750  218533 logs.go:282] 0 containers: []
	W1122 00:32:27.059756  218533 logs.go:284] No container was found matching "kindnet"
	I1122 00:32:27.059762  218533 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:32:27.059813  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:32:27.084321  218533 cri.go:89] found id: ""
	I1122 00:32:27.084353  218533 logs.go:282] 0 containers: []
	W1122 00:32:27.084362  218533 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:32:27.084373  218533 logs.go:123] Gathering logs for CRI-O ...
	I1122 00:32:27.084391  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1122 00:32:27.136326  218533 logs.go:123] Gathering logs for container status ...
	I1122 00:32:27.136349  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1122 00:32:27.164195  218533 logs.go:123] Gathering logs for kubelet ...
	I1122 00:32:27.164223  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:32:27.246634  218533 logs.go:123] Gathering logs for dmesg ...
	I1122 00:32:27.246659  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1122 00:32:27.260429  218533 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:32:27.260454  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1122 00:32:27.315384  218533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1122 00:32:27.315403  218533 logs.go:123] Gathering logs for kube-apiserver [28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5] ...
	I1122 00:32:27.315416  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5"
	I1122 00:32:27.348407  218533 logs.go:123] Gathering logs for kube-scheduler [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33] ...
	I1122 00:32:27.348429  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:32:27.399816  218533 logs.go:123] Gathering logs for kube-controller-manager [93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c] ...
	I1122 00:32:27.399841  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c"
	W1122 00:32:26.232505  252747 pod_ready.go:104] pod "coredns-66bc5c9577-4psr2" is not "Ready", error: <nil>
	W1122 00:32:28.232662  252747 pod_ready.go:104] pod "coredns-66bc5c9577-4psr2" is not "Ready", error: <nil>
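Every cycle starts with api_server.go probing https://192.168.103.2:8443/healthz and getting connection refused, so the kube-apiserver container (28b99c190fe8...) exists but is not serving. A minimal manual probe, assuming shell access to a host that can reach 192.168.103.2:
	curl -sk https://192.168.103.2:8443/healthz; echo
	# a healthy apiserver answers "ok"; "connection refused" matches the
	# api_server.go:269 lines above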
	
	
	==> CRI-O <==
	Nov 22 00:32:03 old-k8s-version-377321 crio[566]: time="2025-11-22T00:32:03.937956137Z" level=info msg="Started container" PID=1748 containerID=2bc31ae3c5f85f8677f2bd3963dacd958e4d2494b6b49f67fdfb9a0c31c680bb description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-mj7xq/dashboard-metrics-scraper id=1bbb047a-fde6-4a35-be56-7d908fc95c82 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6feca78516ad50e4fe97b3f97bada918800401550ffa3af28e6adfb968d1c990
	Nov 22 00:32:04 old-k8s-version-377321 crio[566]: time="2025-11-22T00:32:04.901781164Z" level=info msg="Removing container: cbb7ae0c3ce0a915c8fce0496a4994600161a185f84d868554dcf1510250a817" id=512ef12d-4f13-40d9-9675-0c02c8ade803 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 22 00:32:04 old-k8s-version-377321 crio[566]: time="2025-11-22T00:32:04.917023346Z" level=info msg="Removed container cbb7ae0c3ce0a915c8fce0496a4994600161a185f84d868554dcf1510250a817: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-mj7xq/dashboard-metrics-scraper" id=512ef12d-4f13-40d9-9675-0c02c8ade803 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 22 00:32:13 old-k8s-version-377321 crio[566]: time="2025-11-22T00:32:13.920959768Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=74ddf206-f1c0-4267-86aa-16ec98b17296 name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:32:13 old-k8s-version-377321 crio[566]: time="2025-11-22T00:32:13.921884831Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=c7ab721a-703f-4b70-b934-d302f35996a7 name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:32:13 old-k8s-version-377321 crio[566]: time="2025-11-22T00:32:13.922866705Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=6efebf0d-977b-4b97-9cb9-b5a76a6f4b49 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:32:13 old-k8s-version-377321 crio[566]: time="2025-11-22T00:32:13.922970221Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:32:13 old-k8s-version-377321 crio[566]: time="2025-11-22T00:32:13.926845936Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:32:13 old-k8s-version-377321 crio[566]: time="2025-11-22T00:32:13.92698979Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/4ae7b46de1f51678fa2b4142f4cab6c3b8cb8118eaf7ad88f9d72617d63b3070/merged/etc/passwd: no such file or directory"
	Nov 22 00:32:13 old-k8s-version-377321 crio[566]: time="2025-11-22T00:32:13.927012386Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/4ae7b46de1f51678fa2b4142f4cab6c3b8cb8118eaf7ad88f9d72617d63b3070/merged/etc/group: no such file or directory"
	Nov 22 00:32:13 old-k8s-version-377321 crio[566]: time="2025-11-22T00:32:13.927286304Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:32:13 old-k8s-version-377321 crio[566]: time="2025-11-22T00:32:13.956427253Z" level=info msg="Created container 93bfe3b02b3028d26df5463ec28e27751aec1aaa96f6b52964a50d6b63a63975: kube-system/storage-provisioner/storage-provisioner" id=6efebf0d-977b-4b97-9cb9-b5a76a6f4b49 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:32:13 old-k8s-version-377321 crio[566]: time="2025-11-22T00:32:13.956985812Z" level=info msg="Starting container: 93bfe3b02b3028d26df5463ec28e27751aec1aaa96f6b52964a50d6b63a63975" id=e32c6915-22e0-4597-b5fd-572efba84f3d name=/runtime.v1.RuntimeService/StartContainer
	Nov 22 00:32:13 old-k8s-version-377321 crio[566]: time="2025-11-22T00:32:13.95864215Z" level=info msg="Started container" PID=1766 containerID=93bfe3b02b3028d26df5463ec28e27751aec1aaa96f6b52964a50d6b63a63975 description=kube-system/storage-provisioner/storage-provisioner id=e32c6915-22e0-4597-b5fd-572efba84f3d name=/runtime.v1.RuntimeService/StartContainer sandboxID=8da5c1c27d10fbc44e9019ce3c31b6daa2edfea930125bffb844fd602aab24d2
	Nov 22 00:32:21 old-k8s-version-377321 crio[566]: time="2025-11-22T00:32:21.797532833Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=52b1f0ca-3bd7-43ee-9cd7-d494f4e8a14e name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:32:21 old-k8s-version-377321 crio[566]: time="2025-11-22T00:32:21.798546028Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=2a74c662-ef34-4ff8-921c-6f1663b1ab96 name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:32:21 old-k8s-version-377321 crio[566]: time="2025-11-22T00:32:21.799519829Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-mj7xq/dashboard-metrics-scraper" id=8caaeada-cdf1-495e-bb8a-486b0a00325d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:32:21 old-k8s-version-377321 crio[566]: time="2025-11-22T00:32:21.799630983Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:32:21 old-k8s-version-377321 crio[566]: time="2025-11-22T00:32:21.804980957Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:32:21 old-k8s-version-377321 crio[566]: time="2025-11-22T00:32:21.805482957Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:32:21 old-k8s-version-377321 crio[566]: time="2025-11-22T00:32:21.831622745Z" level=info msg="Created container 1159f8806d56e7efa54dd600adca00cd67da4d0d67ec86f0feef41418ddc6a5d: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-mj7xq/dashboard-metrics-scraper" id=8caaeada-cdf1-495e-bb8a-486b0a00325d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:32:21 old-k8s-version-377321 crio[566]: time="2025-11-22T00:32:21.832175097Z" level=info msg="Starting container: 1159f8806d56e7efa54dd600adca00cd67da4d0d67ec86f0feef41418ddc6a5d" id=f8258c47-fb18-428e-9777-1a65ae5ffea0 name=/runtime.v1.RuntimeService/StartContainer
	Nov 22 00:32:21 old-k8s-version-377321 crio[566]: time="2025-11-22T00:32:21.8338897Z" level=info msg="Started container" PID=1804 containerID=1159f8806d56e7efa54dd600adca00cd67da4d0d67ec86f0feef41418ddc6a5d description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-mj7xq/dashboard-metrics-scraper id=f8258c47-fb18-428e-9777-1a65ae5ffea0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6feca78516ad50e4fe97b3f97bada918800401550ffa3af28e6adfb968d1c990
	Nov 22 00:32:21 old-k8s-version-377321 crio[566]: time="2025-11-22T00:32:21.941797858Z" level=info msg="Removing container: 2bc31ae3c5f85f8677f2bd3963dacd958e4d2494b6b49f67fdfb9a0c31c680bb" id=d94e18e6-594f-460b-a789-34591e73ee8e name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 22 00:32:21 old-k8s-version-377321 crio[566]: time="2025-11-22T00:32:21.953486809Z" level=info msg="Removed container 2bc31ae3c5f85f8677f2bd3963dacd958e4d2494b6b49f67fdfb9a0c31c680bb: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-mj7xq/dashboard-metrics-scraper" id=d94e18e6-594f-460b-a789-34591e73ee8e name=/runtime.v1.RuntimeService/RemoveContainer
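The CRI-O excerpt records dashboard-metrics-scraper being created, started, and its predecessor removed twice within half a minute, i.e. a restart loop; the container-status table below shows the result (STATE Exited, ATTEMPT 2). To read the failing container's own output on the node (a sketch; the ID is taken from this run's table, substitute your own):
	sudo crictl ps -a --name dashboard-metrics-scraper   # list attempts, newest first
	sudo crictl logs 1159f8806d56e                       # short ID from the table below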
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	1159f8806d56e       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           10 seconds ago      Exited              dashboard-metrics-scraper   2                   6feca78516ad5       dashboard-metrics-scraper-5f989dc9cf-mj7xq       kubernetes-dashboard
	93bfe3b02b302       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           18 seconds ago      Running             storage-provisioner         1                   8da5c1c27d10f       storage-provisioner                              kube-system
	d3398b58126a8       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   31 seconds ago      Running             kubernetes-dashboard        0                   4523b7bd40065       kubernetes-dashboard-8694d4445c-8fvls            kubernetes-dashboard
	801c8d5d08f56       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           49 seconds ago      Running             coredns                     0                   95d1eb32ca6c8       coredns-5dd5756b68-lwzsc                         kube-system
	dd56e9f3efdf1       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           49 seconds ago      Running             busybox                     1                   d1a817c1ea701       busybox                                          default
	6a1f00984a7df       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           49 seconds ago      Running             kube-proxy                  0                   6273ec66b8807       kube-proxy-pz8cc                                 kube-system
	570f113a27a51       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           49 seconds ago      Running             kindnet-cni                 0                   c932b507c6e5c       kindnet-f996p                                    kube-system
	6fd900059ec31       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           49 seconds ago      Exited              storage-provisioner         0                   8da5c1c27d10f       storage-provisioner                              kube-system
	0c7b31cf741c7       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           53 seconds ago      Running             etcd                        0                   af7a1f88de16b       etcd-old-k8s-version-377321                      kube-system
	5819251d36741       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           53 seconds ago      Running             kube-apiserver              0                   b9646e1892734       kube-apiserver-old-k8s-version-377321            kube-system
	ed98561b5f5ab       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           53 seconds ago      Running             kube-controller-manager     0                   2b81970a3e077       kube-controller-manager-old-k8s-version-377321   kube-system
	ab6a019fd3f49       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           53 seconds ago      Running             kube-scheduler              0                   ff76ff91af545       kube-scheduler-old-k8s-version-377321            kube-system
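From the host side the same loop surfaces as pod events, assuming the profile's kubeconfig is loaded:
	kubectl -n kubernetes-dashboard describe pod dashboard-metrics-scraper-5f989dc9cf-mj7xq | tail -n 20
	# kubelet emits "Back-off restarting failed container" events while a
	# container keeps exiting, matching the Exited/ATTEMPT 2 row above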
	
	
	==> coredns [801c8d5d08f560e17fd4023d35002a9afed8af82fe042078f52484439238fd06] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:54212 - 636 "HINFO IN 2872590925639124345.4534167897300206030. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.085809203s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
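The coredns log shows the pod serving DNS before its Kubernetes informer cache synced ("starting server with unsynced Kubernetes API"), with the ready plugin still waiting afterwards. Readiness can be checked directly, assuming kubectl access and the default ready-plugin port 8181 (<POD_IP> is a placeholder):
	kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide   # pod phase and IP
	curl -s http://<POD_IP>:8181/ready                            # returns HTTP 200 once all plugins report ready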
	
	
	==> describe nodes <==
	Name:               old-k8s-version-377321
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-377321
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785
	                    minikube.k8s.io/name=old-k8s-version-377321
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_22T00_30_38_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 22 Nov 2025 00:30:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-377321
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 22 Nov 2025 00:32:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 22 Nov 2025 00:32:12 +0000   Sat, 22 Nov 2025 00:30:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 22 Nov 2025 00:32:12 +0000   Sat, 22 Nov 2025 00:30:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 22 Nov 2025 00:32:12 +0000   Sat, 22 Nov 2025 00:30:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 22 Nov 2025 00:32:12 +0000   Sat, 22 Nov 2025 00:31:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-377321
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 5665009e93b91d39dc05718b691e3875
	  System UUID:                6461bf81-9141-4b24-bd64-39ea1ba5c316
	  Boot ID:                    8e6acb0e-95c3-406d-a4d5-d86e16610b0f
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 coredns-5dd5756b68-lwzsc                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     102s
	  kube-system                 etcd-old-k8s-version-377321                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         114s
	  kube-system                 kindnet-f996p                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      102s
	  kube-system                 kube-apiserver-old-k8s-version-377321             250m (3%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-controller-manager-old-k8s-version-377321    200m (2%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-proxy-pz8cc                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 kube-scheduler-old-k8s-version-377321             100m (1%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-mj7xq        0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-8fvls             0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 101s               kube-proxy       
	  Normal  Starting                 49s                kube-proxy       
	  Normal  Starting                 115s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  114s               kubelet          Node old-k8s-version-377321 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    114s               kubelet          Node old-k8s-version-377321 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     114s               kubelet          Node old-k8s-version-377321 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           103s               node-controller  Node old-k8s-version-377321 event: Registered Node old-k8s-version-377321 in Controller
	  Normal  NodeReady                88s                kubelet          Node old-k8s-version-377321 status is now: NodeReady
	  Normal  Starting                 54s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  54s (x8 over 54s)  kubelet          Node old-k8s-version-377321 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s (x8 over 54s)  kubelet          Node old-k8s-version-377321 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s (x8 over 54s)  kubelet          Node old-k8s-version-377321 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           38s                node-controller  Node old-k8s-version-377321 event: Registered Node old-k8s-version-377321 in Controller
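The duplicated event blocks (Starting kubelet at 115s and again at 54s, with x8 condition flaps) record the kubelet restart the test performs; the node recovered, with Ready True since 00:31:04. A quick post-restart check, assuming kubectl against this profile:
	kubectl get node old-k8s-version-377321 \
	  -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'
	# expect Ready=True, matching the Conditions table above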
	
	
	==> dmesg <==
	[  +0.082960] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023673] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.276450] kauditd_printk_skb: 47 callbacks suppressed
	[Nov21 23:48] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.062274] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.023896] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.023887] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.023879] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.023873] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +2.047773] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +4.031577] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[Nov21 23:49] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[ +16.383304] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[ +32.252695] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
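The repeated martian-source lines mean packets with source 127.0.0.1 and destination 10.244.0.21 (a pod IP) arrived on eth0; a loopback source on a non-loopback interface is always treated as martian, common hairpin residue in nested container networking and harmless here. The logging itself is controlled by a sysctl:
	sysctl net.ipv4.conf.all.log_martians              # 1 = log, producing the lines above
	# sudo sysctl -w net.ipv4.conf.all.log_martians=0  # host-level opt-out, if desired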
	
	
	==> etcd [0c7b31cf741c7a5491efff25f26daaf7e50f1b38c7b0275cb2a437a4babfc650] <==
	{"level":"info","ts":"2025-11-22T00:31:39.369441Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-22T00:31:39.369471Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-22T00:31:39.369889Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-22T00:31:39.369505Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-22T00:31:40.856898Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-22T00:31:40.856941Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-22T00:31:40.856969Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-22T00:31:40.856986Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-11-22T00:31:40.856994Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-11-22T00:31:40.857003Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-11-22T00:31:40.857009Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-11-22T00:31:40.858077Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-377321 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-22T00:31:40.858105Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-22T00:31:40.85809Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-22T00:31:40.858271Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-22T00:31:40.858321Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-22T00:31:40.859461Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-22T00:31:40.859463Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-11-22T00:31:47.090279Z","caller":"traceutil/trace.go:171","msg":"trace[1422261589] transaction","detail":"{read_only:false; response_revision:458; number_of_response:1; }","duration":"130.380418ms","start":"2025-11-22T00:31:46.95988Z","end":"2025-11-22T00:31:47.090261Z","steps":["trace[1422261589] 'process raft request'  (duration: 124.483475ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-22T00:31:47.345805Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.364151ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597221089893766 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/old-k8s-version-377321.187a2cd5528ccda7\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/old-k8s-version-377321.187a2cd5528ccda7\" value_size:658 lease:499225184235117955 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-11-22T00:31:47.345921Z","caller":"traceutil/trace.go:171","msg":"trace[1590494862] transaction","detail":"{read_only:false; response_revision:459; number_of_response:1; }","duration":"252.994334ms","start":"2025-11-22T00:31:47.092905Z","end":"2025-11-22T00:31:47.345899Z","steps":["trace[1590494862] 'process raft request'  (duration: 122.270217ms)","trace[1590494862] 'compare'  (duration: 130.274261ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-22T00:31:47.614141Z","caller":"traceutil/trace.go:171","msg":"trace[1548252938] transaction","detail":"{read_only:false; response_revision:462; number_of_response:1; }","duration":"176.753813ms","start":"2025-11-22T00:31:47.437362Z","end":"2025-11-22T00:31:47.614116Z","steps":["trace[1548252938] 'process raft request'  (duration: 130.305911ms)","trace[1548252938] 'compare'  (duration: 46.356016ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-22T00:31:47.826767Z","caller":"traceutil/trace.go:171","msg":"trace[55112707] transaction","detail":"{read_only:false; response_revision:463; number_of_response:1; }","duration":"208.948211ms","start":"2025-11-22T00:31:47.617796Z","end":"2025-11-22T00:31:47.826744Z","steps":["trace[55112707] 'process raft request'  (duration: 129.022116ms)","trace[55112707] 'compare'  (duration: 79.82714ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-22T00:31:48.097469Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"116.49479ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597221089893780 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/old-k8s-version-377321.187a2cd5528ccda7\" mod_revision:462 > success:<request_put:<key:\"/registry/events/default/old-k8s-version-377321.187a2cd5528ccda7\" value_size:658 lease:499225184235117955 >> failure:<request_range:<key:\"/registry/events/default/old-k8s-version-377321.187a2cd5528ccda7\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-22T00:31:48.097534Z","caller":"traceutil/trace.go:171","msg":"trace[461512084] transaction","detail":"{read_only:false; response_revision:465; number_of_response:1; }","duration":"261.090269ms","start":"2025-11-22T00:31:47.836434Z","end":"2025-11-22T00:31:48.097524Z","steps":["trace[461512084] 'process raft request'  (duration: 144.383619ms)","trace[461512084] 'compare'  (duration: 116.380233ms)"],"step_count":2}
	
	
	==> kernel <==
	 00:32:32 up  1:15,  0 user,  load average: 3.32, 3.04, 1.86
	Linux old-k8s-version-377321 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [570f113a27a5135a9bb473c8bdf01eb25f09ab8108a4e98dd642e15f17472989] <==
	I1122 00:31:43.439418       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1122 00:31:43.439685       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1122 00:31:43.439865       1 main.go:148] setting mtu 1500 for CNI 
	I1122 00:31:43.439883       1 main.go:178] kindnetd IP family: "ipv4"
	I1122 00:31:43.439911       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-22T00:31:43Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1122 00:31:43.642659       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1122 00:31:43.642729       1 controller.go:381] "Waiting for informer caches to sync"
	I1122 00:31:43.642746       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1122 00:31:43.735352       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1122 00:31:44.171898       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1122 00:31:44.171936       1 metrics.go:72] Registering metrics
	I1122 00:31:44.172006       1 controller.go:711] "Syncing nftables rules"
	I1122 00:31:53.643176       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1122 00:31:53.643239       1 main.go:301] handling current node
	I1122 00:32:03.643166       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1122 00:32:03.643201       1 main.go:301] handling current node
	I1122 00:32:13.643144       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1122 00:32:13.643175       1 main.go:301] handling current node
	I1122 00:32:23.643003       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1122 00:32:23.643035       1 main.go:301] handling current node
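kindnet's "nri plugin exited" line is informational: the dial target /var/run/nri/nri.sock does not exist on this node, and kindnet carries on handling the node without NRI, as the subsequent ten-second sync lines show. To confirm on the node:
	ls -l /var/run/nri/nri.sock   # absent here, hence the connect error above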
	
	
	==> kube-apiserver [5819251d36741016f113d53581c7c528ace5865eeb58ffe60e69f44d077e7cd2] <==
	I1122 00:31:41.756588       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1122 00:31:41.851128       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1122 00:31:41.851144       1 shared_informer.go:318] Caches are synced for configmaps
	I1122 00:31:41.851673       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1122 00:31:41.852648       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1122 00:31:41.852870       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1122 00:31:41.852897       1 aggregator.go:166] initial CRD sync complete...
	I1122 00:31:41.852908       1 autoregister_controller.go:141] Starting autoregister controller
	I1122 00:31:41.852919       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1122 00:31:41.852927       1 cache.go:39] Caches are synced for autoregister controller
	I1122 00:31:41.854174       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1122 00:31:41.854190       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1122 00:31:41.865398       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1122 00:31:41.881510       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1122 00:31:42.641992       1 controller.go:624] quota admission added evaluator for: namespaces
	I1122 00:31:42.676434       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1122 00:31:42.695540       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1122 00:31:42.702730       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1122 00:31:42.712532       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1122 00:31:42.751563       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.229.37"}
	I1122 00:31:42.754044       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1122 00:31:42.767562       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.161.21"}
	I1122 00:31:54.771674       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1122 00:31:54.923253       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1122 00:31:55.024393       1 controller.go:624] quota admission added evaluator for: endpoints
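The allocated clusterIPs in the apiserver log can be cross-checked against the Services themselves, assuming kubectl access:
	kubectl -n kubernetes-dashboard get svc
	# should list kubernetes-dashboard (10.100.229.37) and
	# dashboard-metrics-scraper (10.109.161.21), matching the alloc.go lines above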
	
	
	==> kube-controller-manager [ed98561b5f5aba5d27a95290d74bdb9ae0ac348ec62233efd0e83b347c5ad42b] <==
	I1122 00:31:54.775743       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1122 00:31:54.776778       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I1122 00:31:54.985411       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-8fvls"
	I1122 00:31:54.987756       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="469.375773ms"
	I1122 00:31:54.988499       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="96.345µs"
	I1122 00:31:54.989242       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-mj7xq"
	I1122 00:31:54.991419       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="216.038779ms"
	I1122 00:31:54.998491       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="221.317474ms"
	I1122 00:31:55.003758       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="5.223447ms"
	I1122 00:31:55.003840       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="43.82µs"
	I1122 00:31:55.008404       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="16.754198ms"
	I1122 00:31:55.008494       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="57.187µs"
	I1122 00:31:55.020681       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="70.805µs"
	I1122 00:31:55.039678       1 shared_informer.go:318] Caches are synced for garbage collector
	I1122 00:31:55.107476       1 shared_informer.go:318] Caches are synced for garbage collector
	I1122 00:31:55.107512       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1122 00:32:01.915344       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="11.099813ms"
	I1122 00:32:01.915974       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="562.153µs"
	I1122 00:32:03.904658       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="67.662µs"
	I1122 00:32:04.917677       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="600.842µs"
	I1122 00:32:05.914209       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="107.516µs"
	I1122 00:32:14.544005       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.429727ms"
	I1122 00:32:14.544140       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="98.39µs"
	I1122 00:32:21.950834       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="107.581µs"
	I1122 00:32:25.907899       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="105.132µs"
	
	
	==> kube-proxy [6a1f00984a7dff4ce68585b4b0994ccd7b263abf46aef826150cbb2693c2b895] <==
	I1122 00:31:43.233815       1 server_others.go:69] "Using iptables proxy"
	I1122 00:31:43.247029       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1122 00:31:43.266393       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1122 00:31:43.268780       1 server_others.go:152] "Using iptables Proxier"
	I1122 00:31:43.268806       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1122 00:31:43.268812       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1122 00:31:43.268839       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1122 00:31:43.269128       1 server.go:846] "Version info" version="v1.28.0"
	I1122 00:31:43.269195       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:31:43.269826       1 config.go:97] "Starting endpoint slice config controller"
	I1122 00:31:43.269832       1 config.go:188] "Starting service config controller"
	I1122 00:31:43.269874       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1122 00:31:43.269876       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1122 00:31:43.269960       1 config.go:315] "Starting node config controller"
	I1122 00:31:43.269969       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1122 00:31:43.370165       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1122 00:31:43.370196       1 shared_informer.go:318] Caches are synced for service config
	I1122 00:31:43.370165       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [ab6a019fd3f49e6fab48be38ce5872af37de1804d6bf8f07d05a6d98aaedd575] <==
	I1122 00:31:39.948334       1 serving.go:348] Generated self-signed cert in-memory
	W1122 00:31:41.765635       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1122 00:31:41.765669       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1122 00:31:41.765682       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1122 00:31:41.765690       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1122 00:31:41.789402       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1122 00:31:41.789486       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:31:41.791902       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:31:41.791954       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1122 00:31:41.794213       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1122 00:31:41.794277       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1122 00:31:41.892544       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 22 00:31:54 old-k8s-version-377321 kubelet[727]: I1122 00:31:54.996461     727 topology_manager.go:215] "Topology Admit Handler" podUID="7455bdaf-04f4-4187-a0e5-e2633acf1e1e" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-mj7xq"
	Nov 22 00:31:55 old-k8s-version-377321 kubelet[727]: I1122 00:31:55.059588     727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/80fdd4a9-2931-48e7-8084-644a5da2b47b-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-8fvls\" (UID: \"80fdd4a9-2931-48e7-8084-644a5da2b47b\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-8fvls"
	Nov 22 00:31:55 old-k8s-version-377321 kubelet[727]: I1122 00:31:55.059649     727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/7455bdaf-04f4-4187-a0e5-e2633acf1e1e-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-mj7xq\" (UID: \"7455bdaf-04f4-4187-a0e5-e2633acf1e1e\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-mj7xq"
	Nov 22 00:31:55 old-k8s-version-377321 kubelet[727]: I1122 00:31:55.059693     727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v94nk\" (UniqueName: \"kubernetes.io/projected/7455bdaf-04f4-4187-a0e5-e2633acf1e1e-kube-api-access-v94nk\") pod \"dashboard-metrics-scraper-5f989dc9cf-mj7xq\" (UID: \"7455bdaf-04f4-4187-a0e5-e2633acf1e1e\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-mj7xq"
	Nov 22 00:31:55 old-k8s-version-377321 kubelet[727]: I1122 00:31:55.059839     727 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g75nt\" (UniqueName: \"kubernetes.io/projected/80fdd4a9-2931-48e7-8084-644a5da2b47b-kube-api-access-g75nt\") pod \"kubernetes-dashboard-8694d4445c-8fvls\" (UID: \"80fdd4a9-2931-48e7-8084-644a5da2b47b\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-8fvls"
	Nov 22 00:32:03 old-k8s-version-377321 kubelet[727]: I1122 00:32:03.893380     727 scope.go:117] "RemoveContainer" containerID="cbb7ae0c3ce0a915c8fce0496a4994600161a185f84d868554dcf1510250a817"
	Nov 22 00:32:03 old-k8s-version-377321 kubelet[727]: I1122 00:32:03.904586     727 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-8fvls" podStartSLOduration=4.669567495 podCreationTimestamp="2025-11-22 00:31:54 +0000 UTC" firstStartedPulling="2025-11-22 00:31:55.921048277 +0000 UTC m=+17.215118139" lastFinishedPulling="2025-11-22 00:32:01.156015745 +0000 UTC m=+22.450085596" observedRunningTime="2025-11-22 00:32:01.905762545 +0000 UTC m=+23.199832413" watchObservedRunningTime="2025-11-22 00:32:03.904534952 +0000 UTC m=+25.198604824"
	Nov 22 00:32:04 old-k8s-version-377321 kubelet[727]: I1122 00:32:04.898567     727 scope.go:117] "RemoveContainer" containerID="cbb7ae0c3ce0a915c8fce0496a4994600161a185f84d868554dcf1510250a817"
	Nov 22 00:32:04 old-k8s-version-377321 kubelet[727]: I1122 00:32:04.898900     727 scope.go:117] "RemoveContainer" containerID="2bc31ae3c5f85f8677f2bd3963dacd958e4d2494b6b49f67fdfb9a0c31c680bb"
	Nov 22 00:32:04 old-k8s-version-377321 kubelet[727]: E1122 00:32:04.899526     727 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-mj7xq_kubernetes-dashboard(7455bdaf-04f4-4187-a0e5-e2633acf1e1e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-mj7xq" podUID="7455bdaf-04f4-4187-a0e5-e2633acf1e1e"
	Nov 22 00:32:05 old-k8s-version-377321 kubelet[727]: I1122 00:32:05.902598     727 scope.go:117] "RemoveContainer" containerID="2bc31ae3c5f85f8677f2bd3963dacd958e4d2494b6b49f67fdfb9a0c31c680bb"
	Nov 22 00:32:05 old-k8s-version-377321 kubelet[727]: E1122 00:32:05.902962     727 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-mj7xq_kubernetes-dashboard(7455bdaf-04f4-4187-a0e5-e2633acf1e1e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-mj7xq" podUID="7455bdaf-04f4-4187-a0e5-e2633acf1e1e"
	Nov 22 00:32:06 old-k8s-version-377321 kubelet[727]: I1122 00:32:06.905338     727 scope.go:117] "RemoveContainer" containerID="2bc31ae3c5f85f8677f2bd3963dacd958e4d2494b6b49f67fdfb9a0c31c680bb"
	Nov 22 00:32:06 old-k8s-version-377321 kubelet[727]: E1122 00:32:06.905625     727 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-mj7xq_kubernetes-dashboard(7455bdaf-04f4-4187-a0e5-e2633acf1e1e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-mj7xq" podUID="7455bdaf-04f4-4187-a0e5-e2633acf1e1e"
	Nov 22 00:32:13 old-k8s-version-377321 kubelet[727]: I1122 00:32:13.920534     727 scope.go:117] "RemoveContainer" containerID="6fd900059ec31ad554d574671f6b2f24e47fc4c2cfa17b61d25d410687f7c02f"
	Nov 22 00:32:21 old-k8s-version-377321 kubelet[727]: I1122 00:32:21.796955     727 scope.go:117] "RemoveContainer" containerID="2bc31ae3c5f85f8677f2bd3963dacd958e4d2494b6b49f67fdfb9a0c31c680bb"
	Nov 22 00:32:21 old-k8s-version-377321 kubelet[727]: I1122 00:32:21.940627     727 scope.go:117] "RemoveContainer" containerID="2bc31ae3c5f85f8677f2bd3963dacd958e4d2494b6b49f67fdfb9a0c31c680bb"
	Nov 22 00:32:21 old-k8s-version-377321 kubelet[727]: I1122 00:32:21.940829     727 scope.go:117] "RemoveContainer" containerID="1159f8806d56e7efa54dd600adca00cd67da4d0d67ec86f0feef41418ddc6a5d"
	Nov 22 00:32:21 old-k8s-version-377321 kubelet[727]: E1122 00:32:21.941221     727 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-mj7xq_kubernetes-dashboard(7455bdaf-04f4-4187-a0e5-e2633acf1e1e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-mj7xq" podUID="7455bdaf-04f4-4187-a0e5-e2633acf1e1e"
	Nov 22 00:32:25 old-k8s-version-377321 kubelet[727]: I1122 00:32:25.898833     727 scope.go:117] "RemoveContainer" containerID="1159f8806d56e7efa54dd600adca00cd67da4d0d67ec86f0feef41418ddc6a5d"
	Nov 22 00:32:25 old-k8s-version-377321 kubelet[727]: E1122 00:32:25.899229     727 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-mj7xq_kubernetes-dashboard(7455bdaf-04f4-4187-a0e5-e2633acf1e1e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-mj7xq" podUID="7455bdaf-04f4-4187-a0e5-e2633acf1e1e"
	Nov 22 00:32:28 old-k8s-version-377321 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 22 00:32:28 old-k8s-version-377321 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 22 00:32:28 old-k8s-version-377321 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 22 00:32:28 old-k8s-version-377321 systemd[1]: kubelet.service: Consumed 1.369s CPU time.
	
	
	==> kubernetes-dashboard [d3398b58126a8fcaaa90af41bb9b636f054fe29a545311e069c0bf53e69969c0] <==
	2025/11/22 00:32:01 Using namespace: kubernetes-dashboard
	2025/11/22 00:32:01 Using in-cluster config to connect to apiserver
	2025/11/22 00:32:01 Using secret token for csrf signing
	2025/11/22 00:32:01 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/22 00:32:01 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/22 00:32:01 Successful initial request to the apiserver, version: v1.28.0
	2025/11/22 00:32:01 Generating JWE encryption key
	2025/11/22 00:32:01 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/22 00:32:01 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/22 00:32:01 Initializing JWE encryption key from synchronized object
	2025/11/22 00:32:01 Creating in-cluster Sidecar client
	2025/11/22 00:32:01 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/22 00:32:01 Serving insecurely on HTTP port: 9090
	2025/11/22 00:32:01 Starting overwatch
	2025/11/22 00:32:31 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [6fd900059ec31ad554d574671f6b2f24e47fc4c2cfa17b61d25d410687f7c02f] <==
	I1122 00:31:43.173344       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1122 00:32:13.176295       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [93bfe3b02b3028d26df5463ec28e27751aec1aaa96f6b52964a50d6b63a63975] <==
	I1122 00:32:13.969873       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1122 00:32:13.977928       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1122 00:32:13.977969       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1122 00:32:31.372112       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1122 00:32:31.372391       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-377321_c0353489-a7e2-4034-8193-50438237000c!
	I1122 00:32:31.372537       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bfb541a1-68f5-4661-b231-b0efc70ccf66", APIVersion:"v1", ResourceVersion:"614", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-377321_c0353489-a7e2-4034-8193-50438237000c became leader
	I1122 00:32:31.473355       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-377321_c0353489-a7e2-4034-8193-50438237000c!
	

                                                
                                                
-- /stdout --
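Two failures in the logs above are independent of the pause attempt itself: dashboard-metrics-scraper is in CrashLoopBackOff (the kubelet repeatedly removes and restarts container 2bc31ae3…), and the first storage-provisioner instance died because it could not reach the apiserver service IP (dial tcp 10.96.0.1:443: i/o timeout). A minimal triage sketch, assuming crictl inside the node is already pointed at the crio socket (the default on these kicbase images):

	# open a shell on the node
	out/minikube-linux-amd64 ssh -p old-k8s-version-377321
	# list every dashboard-metrics-scraper container, including exited ones
	sudo crictl ps -a --name dashboard-metrics-scraper
	# dump the output of the instance the kubelet keeps restarting (ID from the kubelet log above)
	sudo crictl logs 2bc31ae3c5f85f8677f2bd3963dacd958e4d2494b6b49f67fdfb9a0c31c680bb
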
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-377321 -n old-k8s-version-377321
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-377321 -n old-k8s-version-377321: exit status 2 (341.023813ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-377321 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (5.50s)
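To reproduce this failure outside the harness, re-run the pause command recorded in the audit history later in this report and capture logs for attachment; a minimal sketch, assuming the profile still exists (the audit log shows it was deleted right after this run):

	out/minikube-linux-amd64 pause -p old-k8s-version-377321 --alsologtostderr -v=1
	out/minikube-linux-amd64 -p old-k8s-version-377321 logs --file=logs.txt
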

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (5.37s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-983546 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-983546 --alsologtostderr -v=1: exit status 80 (1.710934835s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-983546 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1122 00:32:46.367215  262955 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:32:46.367317  262955 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:32:46.367325  262955 out.go:374] Setting ErrFile to fd 2...
	I1122 00:32:46.367330  262955 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:32:46.367645  262955 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9122/.minikube/bin
	I1122 00:32:46.367946  262955 out.go:368] Setting JSON to false
	I1122 00:32:46.367972  262955 mustload.go:66] Loading cluster: no-preload-983546
	I1122 00:32:46.368344  262955 config.go:182] Loaded profile config "no-preload-983546": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:32:46.368800  262955 cli_runner.go:164] Run: docker container inspect no-preload-983546 --format={{.State.Status}}
	I1122 00:32:46.387291  262955 host.go:66] Checking if "no-preload-983546" exists ...
	I1122 00:32:46.387538  262955 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:32:46.442929  262955 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:86 SystemTime:2025-11-22 00:32:46.433908004 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1122 00:32:46.443613  262955 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-983546 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1122 00:32:46.446437  262955 out.go:179] * Pausing node no-preload-983546 ... 
	I1122 00:32:46.447606  262955 host.go:66] Checking if "no-preload-983546" exists ...
	I1122 00:32:46.447838  262955 ssh_runner.go:195] Run: systemctl --version
	I1122 00:32:46.447876  262955 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983546
	I1122 00:32:46.464901  262955 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/no-preload-983546/id_rsa Username:docker}
	I1122 00:32:46.556852  262955 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:32:46.568815  262955 pause.go:52] kubelet running: true
	I1122 00:32:46.568876  262955 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1122 00:32:46.731635  262955 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1122 00:32:46.731722  262955 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1122 00:32:46.796914  262955 cri.go:89] found id: "8c92a15018c69e9b3f0bea0b14a42cab51aa67d3463c26f8e3d6595b3b7a1e9b"
	I1122 00:32:46.796933  262955 cri.go:89] found id: "76af8b2fe949b0dc8efe070c8a594ad008ad64c67858fd8f0ae558a32f45fe76"
	I1122 00:32:46.796938  262955 cri.go:89] found id: "f501552abfbef5c6045420811113cd307ddce87f32aa846610d14b395f7e0108"
	I1122 00:32:46.796941  262955 cri.go:89] found id: "2365588cd931c5a0a64bdb393d00bff477ceb2e6f2d00673d57d9ebfcf40c30d"
	I1122 00:32:46.796944  262955 cri.go:89] found id: "95fad35fc65181b4daf9312acf2f79014f290dccfe9528995d782fe1fbb107aa"
	I1122 00:32:46.796948  262955 cri.go:89] found id: "15ff8ca6c3bd36d322a741daeef18c6b81980f6be123f1eccf822d0b1ce32e19"
	I1122 00:32:46.796951  262955 cri.go:89] found id: "2395f0fc0ddc2558b662ecf094a2c9137111096336ce24f63f4bb978edacc84d"
	I1122 00:32:46.796953  262955 cri.go:89] found id: "2e71abd4010063bf4aff10634290d6163b0d784274776fb107399539e1af2d22"
	I1122 00:32:46.796956  262955 cri.go:89] found id: "748b8383a47b0f40485edc4c674299b4dcb993eccaae00337a17f00f55de0076"
	I1122 00:32:46.796964  262955 cri.go:89] found id: "44f5b1a4bec9348a3b57d7acfabe68de74e478695826968e3d4b17db6d1c654d"
	I1122 00:32:46.796967  262955 cri.go:89] found id: "0065675f0b4131b7a62e9267f42c240303e8fb7718d81fed3827230d2167933b"
	I1122 00:32:46.796970  262955 cri.go:89] found id: ""
	I1122 00:32:46.797000  262955 ssh_runner.go:195] Run: sudo runc list -f json
	I1122 00:32:46.808272  262955 retry.go:31] will retry after 221.961ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-22T00:32:46Z" level=error msg="open /run/runc: no such file or directory"
	I1122 00:32:47.030717  262955 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:32:47.046155  262955 pause.go:52] kubelet running: false
	I1122 00:32:47.046222  262955 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1122 00:32:47.192410  262955 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1122 00:32:47.192536  262955 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1122 00:32:47.259467  262955 cri.go:89] found id: "8c92a15018c69e9b3f0bea0b14a42cab51aa67d3463c26f8e3d6595b3b7a1e9b"
	I1122 00:32:47.259490  262955 cri.go:89] found id: "76af8b2fe949b0dc8efe070c8a594ad008ad64c67858fd8f0ae558a32f45fe76"
	I1122 00:32:47.259497  262955 cri.go:89] found id: "f501552abfbef5c6045420811113cd307ddce87f32aa846610d14b395f7e0108"
	I1122 00:32:47.259502  262955 cri.go:89] found id: "2365588cd931c5a0a64bdb393d00bff477ceb2e6f2d00673d57d9ebfcf40c30d"
	I1122 00:32:47.259507  262955 cri.go:89] found id: "95fad35fc65181b4daf9312acf2f79014f290dccfe9528995d782fe1fbb107aa"
	I1122 00:32:47.259515  262955 cri.go:89] found id: "15ff8ca6c3bd36d322a741daeef18c6b81980f6be123f1eccf822d0b1ce32e19"
	I1122 00:32:47.259519  262955 cri.go:89] found id: "2395f0fc0ddc2558b662ecf094a2c9137111096336ce24f63f4bb978edacc84d"
	I1122 00:32:47.259524  262955 cri.go:89] found id: "2e71abd4010063bf4aff10634290d6163b0d784274776fb107399539e1af2d22"
	I1122 00:32:47.259528  262955 cri.go:89] found id: "748b8383a47b0f40485edc4c674299b4dcb993eccaae00337a17f00f55de0076"
	I1122 00:32:47.259550  262955 cri.go:89] found id: "44f5b1a4bec9348a3b57d7acfabe68de74e478695826968e3d4b17db6d1c654d"
	I1122 00:32:47.259559  262955 cri.go:89] found id: "0065675f0b4131b7a62e9267f42c240303e8fb7718d81fed3827230d2167933b"
	I1122 00:32:47.259563  262955 cri.go:89] found id: ""
	I1122 00:32:47.259604  262955 ssh_runner.go:195] Run: sudo runc list -f json
	I1122 00:32:47.271528  262955 retry.go:31] will retry after 496.932305ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-22T00:32:47Z" level=error msg="open /run/runc: no such file or directory"
	I1122 00:32:47.769247  262955 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:32:47.781580  262955 pause.go:52] kubelet running: false
	I1122 00:32:47.781635  262955 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1122 00:32:47.927524  262955 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1122 00:32:47.927614  262955 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1122 00:32:47.996022  262955 cri.go:89] found id: "8c92a15018c69e9b3f0bea0b14a42cab51aa67d3463c26f8e3d6595b3b7a1e9b"
	I1122 00:32:47.996048  262955 cri.go:89] found id: "76af8b2fe949b0dc8efe070c8a594ad008ad64c67858fd8f0ae558a32f45fe76"
	I1122 00:32:47.996077  262955 cri.go:89] found id: "f501552abfbef5c6045420811113cd307ddce87f32aa846610d14b395f7e0108"
	I1122 00:32:47.996082  262955 cri.go:89] found id: "2365588cd931c5a0a64bdb393d00bff477ceb2e6f2d00673d57d9ebfcf40c30d"
	I1122 00:32:47.996087  262955 cri.go:89] found id: "95fad35fc65181b4daf9312acf2f79014f290dccfe9528995d782fe1fbb107aa"
	I1122 00:32:47.996092  262955 cri.go:89] found id: "15ff8ca6c3bd36d322a741daeef18c6b81980f6be123f1eccf822d0b1ce32e19"
	I1122 00:32:47.996095  262955 cri.go:89] found id: "2395f0fc0ddc2558b662ecf094a2c9137111096336ce24f63f4bb978edacc84d"
	I1122 00:32:47.996099  262955 cri.go:89] found id: "2e71abd4010063bf4aff10634290d6163b0d784274776fb107399539e1af2d22"
	I1122 00:32:47.996104  262955 cri.go:89] found id: "748b8383a47b0f40485edc4c674299b4dcb993eccaae00337a17f00f55de0076"
	I1122 00:32:47.996115  262955 cri.go:89] found id: "44f5b1a4bec9348a3b57d7acfabe68de74e478695826968e3d4b17db6d1c654d"
	I1122 00:32:47.996120  262955 cri.go:89] found id: "0065675f0b4131b7a62e9267f42c240303e8fb7718d81fed3827230d2167933b"
	I1122 00:32:47.996124  262955 cri.go:89] found id: ""
	I1122 00:32:47.996174  262955 ssh_runner.go:195] Run: sudo runc list -f json
	I1122 00:32:48.009874  262955 out.go:203] 
	W1122 00:32:48.010860  262955 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-22T00:32:48Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-22T00:32:48Z" level=error msg="open /run/runc: no such file or directory"
	
	W1122 00:32:48.010878  262955 out.go:285] * 
	* 
	W1122 00:32:48.014934  262955 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1122 00:32:48.016031  262955 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p no-preload-983546 --alsologtostderr -v=1 failed: exit status 80
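The trace shows the mechanics of the exit status 80: pause disables the kubelet, enumerates the kube-system/kubernetes-dashboard/istio-operator containers through crictl (eleven IDs found on each pass), then tries to list them with `sudo runc list -f json`, retrying with backoff (after 221ms, then after 497ms) before giving up on `open /run/runc: no such file or directory`. Since crictl clearly sees running containers, the likely mismatch is the OCI runtime state directory: if crio on this image runs containers under crun, state lives under /run/crun and /run/runc is never created (an assumption about this image, not something the trace proves). A quick on-node check, as a sketch:

	# which OCI runtime(s) is crio configured with?
	sudo crictl info | grep -A5 '"runtimes"'
	# compare the candidate state directories (/run/crun is the assumed crun default)
	ls -ld /run/runc /run/crun
	# runc only reports containers under the root it is pointed at
	sudo runc --root /run/runc list
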
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-983546
helpers_test.go:243: (dbg) docker inspect no-preload-983546:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c2d293e7736fe65170e6cfd040c09329134034bfda01d91b453ec0eec7c9e352",
	        "Created": "2025-11-22T00:30:36.232639451Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 253019,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-22T00:31:50.874168121Z",
	            "FinishedAt": "2025-11-22T00:31:49.972307889Z"
	        },
	        "Image": "sha256:e5906b22e872a17998ae88aee6d850484e7a99144e0db6afcf2c44a53e6042d4",
	        "ResolvConfPath": "/var/lib/docker/containers/c2d293e7736fe65170e6cfd040c09329134034bfda01d91b453ec0eec7c9e352/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c2d293e7736fe65170e6cfd040c09329134034bfda01d91b453ec0eec7c9e352/hostname",
	        "HostsPath": "/var/lib/docker/containers/c2d293e7736fe65170e6cfd040c09329134034bfda01d91b453ec0eec7c9e352/hosts",
	        "LogPath": "/var/lib/docker/containers/c2d293e7736fe65170e6cfd040c09329134034bfda01d91b453ec0eec7c9e352/c2d293e7736fe65170e6cfd040c09329134034bfda01d91b453ec0eec7c9e352-json.log",
	        "Name": "/no-preload-983546",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-983546:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-983546",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c2d293e7736fe65170e6cfd040c09329134034bfda01d91b453ec0eec7c9e352",
	                "LowerDir": "/var/lib/docker/overlay2/bc6393df2dd76d86cdad678716ec02d56105fb8e0b4e3f663f709c883352904b-init/diff:/var/lib/docker/overlay2/ba9391341a6f38c98c330a240b44a37e901d779a7c95e15c141c59c46e09c348/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bc6393df2dd76d86cdad678716ec02d56105fb8e0b4e3f663f709c883352904b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bc6393df2dd76d86cdad678716ec02d56105fb8e0b4e3f663f709c883352904b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bc6393df2dd76d86cdad678716ec02d56105fb8e0b4e3f663f709c883352904b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-983546",
	                "Source": "/var/lib/docker/volumes/no-preload-983546/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-983546",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-983546",
	                "name.minikube.sigs.k8s.io": "no-preload-983546",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "e3b2fe3ffd629dc8078f7123b89cc10981904efd5551b93ce827bde19ea063da",
	            "SandboxKey": "/var/run/docker/netns/e3b2fe3ffd62",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33073"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33074"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33077"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33075"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33076"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-983546": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "31079b5ab75bb84607cf8165e3a4b768618e4392cb34bdd501083b6a67908eda",
	                    "EndpointID": "c2c4b0f42f4e343dd244d3976ea48a347498a84d9b70854e18f267bbd0a245ef",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "0e:13:f4:24:27:f2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-983546",
	                        "c2d293e7736f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
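The PortBindings above also explain the SSH endpoint used during the pause attempt: the harness resolves the host side of the container's 22/tcp mapping with a Go template, the same query visible in the stderr trace, and against this container it returns 33073:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-983546
	# -> 33073, matching the "new ssh client" line (sshutil.go:53) in the pause trace
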
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-983546 -n no-preload-983546
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-983546 -n no-preload-983546: exit status 2 (313.500238ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-983546 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-983546 logs -n 25: (1.036789472s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p old-k8s-version-377321 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-377321       │ jenkins │ v1.37.0 │ 22 Nov 25 00:30 UTC │ 22 Nov 25 00:31 UTC │
	│ stop    │ -p NoKubernetes-953061                                                                                                                                                                                                                        │ NoKubernetes-953061          │ jenkins │ v1.37.0 │ 22 Nov 25 00:30 UTC │ 22 Nov 25 00:30 UTC │
	│ start   │ -p NoKubernetes-953061 --driver=docker  --container-runtime=crio                                                                                                                                                                              │ NoKubernetes-953061          │ jenkins │ v1.37.0 │ 22 Nov 25 00:30 UTC │ 22 Nov 25 00:30 UTC │
	│ ssh     │ -p NoKubernetes-953061 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-953061          │ jenkins │ v1.37.0 │ 22 Nov 25 00:30 UTC │                     │
	│ delete  │ -p NoKubernetes-953061                                                                                                                                                                                                                        │ NoKubernetes-953061          │ jenkins │ v1.37.0 │ 22 Nov 25 00:30 UTC │ 22 Nov 25 00:30 UTC │
	│ start   │ -p no-preload-983546 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-983546            │ jenkins │ v1.37.0 │ 22 Nov 25 00:30 UTC │ 22 Nov 25 00:31 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-377321 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-377321       │ jenkins │ v1.37.0 │ 22 Nov 25 00:31 UTC │                     │
	│ stop    │ -p old-k8s-version-377321 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-377321       │ jenkins │ v1.37.0 │ 22 Nov 25 00:31 UTC │ 22 Nov 25 00:31 UTC │
	│ addons  │ enable metrics-server -p no-preload-983546 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-983546            │ jenkins │ v1.37.0 │ 22 Nov 25 00:31 UTC │                     │
	│ addons  │ enable dashboard -p old-k8s-version-377321 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-377321       │ jenkins │ v1.37.0 │ 22 Nov 25 00:31 UTC │ 22 Nov 25 00:31 UTC │
	│ start   │ -p old-k8s-version-377321 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-377321       │ jenkins │ v1.37.0 │ 22 Nov 25 00:31 UTC │ 22 Nov 25 00:32 UTC │
	│ stop    │ -p no-preload-983546 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-983546            │ jenkins │ v1.37.0 │ 22 Nov 25 00:31 UTC │ 22 Nov 25 00:31 UTC │
	│ start   │ -p cert-expiration-624739 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-624739       │ jenkins │ v1.37.0 │ 22 Nov 25 00:31 UTC │ 22 Nov 25 00:31 UTC │
	│ delete  │ -p cert-expiration-624739                                                                                                                                                                                                                     │ cert-expiration-624739       │ jenkins │ v1.37.0 │ 22 Nov 25 00:31 UTC │ 22 Nov 25 00:31 UTC │
	│ start   │ -p embed-certs-084979 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-084979           │ jenkins │ v1.37.0 │ 22 Nov 25 00:31 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-983546 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-983546            │ jenkins │ v1.37.0 │ 22 Nov 25 00:31 UTC │ 22 Nov 25 00:31 UTC │
	│ start   │ -p no-preload-983546 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-983546            │ jenkins │ v1.37.0 │ 22 Nov 25 00:31 UTC │ 22 Nov 25 00:32 UTC │
	│ image   │ old-k8s-version-377321 image list --format=json                                                                                                                                                                                               │ old-k8s-version-377321       │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │ 22 Nov 25 00:32 UTC │
	│ pause   │ -p old-k8s-version-377321 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-377321       │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │                     │
	│ delete  │ -p old-k8s-version-377321                                                                                                                                                                                                                     │ old-k8s-version-377321       │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │ 22 Nov 25 00:32 UTC │
	│ delete  │ -p old-k8s-version-377321                                                                                                                                                                                                                     │ old-k8s-version-377321       │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │ 22 Nov 25 00:32 UTC │
	│ delete  │ -p disable-driver-mounts-751225                                                                                                                                                                                                               │ disable-driver-mounts-751225 │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │ 22 Nov 25 00:32 UTC │
	│ start   │ -p default-k8s-diff-port-046175 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-046175 │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │                     │
	│ image   │ no-preload-983546 image list --format=json                                                                                                                                                                                                    │ no-preload-983546            │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │ 22 Nov 25 00:32 UTC │
	│ pause   │ -p no-preload-983546 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-983546            │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/22 00:32:36
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
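
	The header above fully specifies the klog-style prefix used by every minikube line in this report. For anyone post-processing these logs, a minimal Go sketch of a parser for that format (field names are illustrative, not minikube's):

		package main

		import (
			"fmt"
			"regexp"
		)

		// klogLine matches the "[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg"
		// format declared in the log header.
		var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

		func main() {
			m := klogLine.FindStringSubmatch("I1122 00:32:36.522235  261434 out.go:360] Setting OutFile to fd 1 ...")
			if m != nil {
				fmt.Printf("level=%s date=%s time=%s pid=%s file=%s line=%s msg=%q\n",
					m[1], m[2], m[3], m[4], m[5], m[6], m[7])
			}
		}
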
	I1122 00:32:36.522235  261434 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:32:36.522463  261434 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:32:36.522471  261434 out.go:374] Setting ErrFile to fd 2...
	I1122 00:32:36.522476  261434 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:32:36.522653  261434 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9122/.minikube/bin
	I1122 00:32:36.523087  261434 out.go:368] Setting JSON to false
	I1122 00:32:36.524268  261434 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4505,"bootTime":1763767051,"procs":324,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1122 00:32:36.524316  261434 start.go:143] virtualization: kvm guest
	I1122 00:32:36.525966  261434 out.go:179] * [default-k8s-diff-port-046175] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1122 00:32:36.526977  261434 out.go:179]   - MINIKUBE_LOCATION=21934
	I1122 00:32:36.526974  261434 notify.go:221] Checking for updates...
	I1122 00:32:36.528839  261434 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 00:32:36.529767  261434 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-9122/kubeconfig
	I1122 00:32:36.530745  261434 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-9122/.minikube
	I1122 00:32:36.531855  261434 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1122 00:32:36.535213  261434 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 00:32:36.536590  261434 config.go:182] Loaded profile config "embed-certs-084979": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:32:36.536673  261434 config.go:182] Loaded profile config "kubernetes-upgrade-619859": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:32:36.536748  261434 config.go:182] Loaded profile config "no-preload-983546": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:32:36.536812  261434 driver.go:422] Setting default libvirt URI to qemu:///system
	I1122 00:32:36.560163  261434 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1122 00:32:36.560334  261434 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:32:36.623309  261434 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-22 00:32:36.612661835 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1122 00:32:36.623445  261434 docker.go:319] overlay module found
	I1122 00:32:36.625329  261434 out.go:179] * Using the docker driver based on user configuration
	I1122 00:32:36.626314  261434 start.go:309] selected driver: docker
	I1122 00:32:36.626325  261434 start.go:930] validating driver "docker" against <nil>
	I1122 00:32:36.626335  261434 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 00:32:36.626894  261434 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:32:36.687219  261434 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-22 00:32:36.677483101 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1122 00:32:36.687408  261434 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1122 00:32:36.687646  261434 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:32:36.689043  261434 out.go:179] * Using Docker driver with root privileges
	I1122 00:32:36.690093  261434 cni.go:84] Creating CNI manager for ""
	I1122 00:32:36.690164  261434 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1122 00:32:36.690179  261434 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1122 00:32:36.690251  261434 start.go:353] cluster config:
	{Name:default-k8s-diff-port-046175 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-046175 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:32:36.691339  261434 out.go:179] * Starting "default-k8s-diff-port-046175" primary control-plane node in "default-k8s-diff-port-046175" cluster
	I1122 00:32:36.692215  261434 cache.go:134] Beginning downloading kic base image for docker with crio
	I1122 00:32:36.693206  261434 out.go:179] * Pulling base image v0.0.48-1763588073-21934 ...
	I1122 00:32:36.694137  261434 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:32:36.694170  261434 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21934-9122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1122 00:32:36.694187  261434 cache.go:65] Caching tarball of preloaded images
	I1122 00:32:36.694228  261434 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon
	I1122 00:32:36.694288  261434 preload.go:238] Found /home/jenkins/minikube-integration/21934-9122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1122 00:32:36.694305  261434 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1122 00:32:36.694423  261434 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/default-k8s-diff-port-046175/config.json ...
	I1122 00:32:36.694456  261434 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/default-k8s-diff-port-046175/config.json: {Name:mk5d2f83b350e180ea73c8b8de614cee9a70b3fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:32:36.714319  261434 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon, skipping pull
	I1122 00:32:36.714340  261434 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e exists in daemon, skipping load
	I1122 00:32:36.714354  261434 cache.go:243] Successfully downloaded all kic artifacts
	I1122 00:32:36.714373  261434 start.go:360] acquireMachinesLock for default-k8s-diff-port-046175: {Name:mkead8b34d9557aba416ceaab7176eb30fd80326 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:32:36.714446  261434 start.go:364] duration metric: took 60.196µs to acquireMachinesLock for "default-k8s-diff-port-046175"
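
	The lock.go and start.go lines above both describe the same retry contract: poll for the lock every 500ms (Delay) and give up at the Timeout. Minikube's actual lock implementation is not shown in this log; a minimal file-based sketch of that contract, with hypothetical names:

		package main

		import (
			"fmt"
			"os"
			"time"
		)

		// acquireFileLock retries with a fixed delay until the deadline, matching
		// the {Delay:500ms Timeout:10m0s} contract printed by start.go above.
		func acquireFileLock(path string, delay, timeout time.Duration) (release func(), err error) {
			deadline := time.Now().Add(timeout)
			for {
				f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
				if err == nil {
					f.Close()
					return func() { os.Remove(path) }, nil
				}
				if time.Now().After(deadline) {
					return nil, fmt.Errorf("timed out acquiring %s: %w", path, err)
				}
				time.Sleep(delay)
			}
		}

		func main() {
			release, err := acquireFileLock("/tmp/minikube-machines.lock", 500*time.Millisecond, 10*time.Minute)
			if err != nil {
				panic(err)
			}
			defer release()
			fmt.Println("lock held; provisioning would run here")
		}
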
	I1122 00:32:36.714470  261434 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-046175 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-046175 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1122 00:32:36.714521  261434 start.go:125] createHost starting for "" (driver="docker")
	W1122 00:32:35.497349  250396 node_ready.go:57] node "embed-certs-084979" has "Ready":"False" status (will retry)
	W1122 00:32:37.498237  250396 node_ready.go:57] node "embed-certs-084979" has "Ready":"False" status (will retry)
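
	The node_ready.go warnings interleaved through this section come from a poll loop: fetch the node, look for a Ready=True condition, and retry roughly every two seconds until a deadline. A sketch of that check using client-go (the function name and timeout are illustrative, not minikube's code):

		package main

		import (
			"context"
			"fmt"
			"os"
			"time"

			corev1 "k8s.io/api/core/v1"
			metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
			"k8s.io/client-go/kubernetes"
			"k8s.io/client-go/tools/clientcmd"
		)

		// waitNodeReady mirrors the "will retry" loop in the log: fetch the node,
		// scan its conditions for Ready=True, sleep, repeat until the deadline.
		func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
			deadline := time.Now().Add(timeout)
			for time.Now().Before(deadline) {
				node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
				if err == nil {
					for _, c := range node.Status.Conditions {
						if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
							return nil
						}
					}
				}
				time.Sleep(2 * time.Second) // the log shows ~2s between retries
			}
			return fmt.Errorf("node %q never became Ready", name)
		}

		func main() {
			cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
			if err != nil {
				panic(err)
			}
			cs := kubernetes.NewForConfigOrDie(cfg)
			fmt.Println(waitNodeReady(cs, "embed-certs-084979", 6*time.Minute))
		}
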
	I1122 00:32:34.926466  218533 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1122 00:32:34.926546  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:32:34.926631  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:32:34.962129  218533 cri.go:89] found id: "31805891c113692793d324f62051c857bb1cbdf7c4889dd16e4c528a240a0a9b"
	I1122 00:32:34.962156  218533 cri.go:89] found id: "28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5"
	I1122 00:32:34.962162  218533 cri.go:89] found id: ""
	I1122 00:32:34.962173  218533 logs.go:282] 2 containers: [31805891c113692793d324f62051c857bb1cbdf7c4889dd16e4c528a240a0a9b 28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5]
	I1122 00:32:34.962239  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:32:34.966901  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:32:34.971260  218533 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1122 00:32:34.971335  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:32:34.999314  218533 cri.go:89] found id: ""
	I1122 00:32:34.999339  218533 logs.go:282] 0 containers: []
	W1122 00:32:34.999349  218533 logs.go:284] No container was found matching "etcd"
	I1122 00:32:34.999356  218533 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1122 00:32:34.999408  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:32:35.029202  218533 cri.go:89] found id: ""
	I1122 00:32:35.029235  218533 logs.go:282] 0 containers: []
	W1122 00:32:35.029247  218533 logs.go:284] No container was found matching "coredns"
	I1122 00:32:35.029260  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:32:35.029323  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:32:35.056223  218533 cri.go:89] found id: "f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:32:35.056247  218533 cri.go:89] found id: ""
	I1122 00:32:35.056257  218533 logs.go:282] 1 containers: [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33]
	I1122 00:32:35.056313  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:32:35.060360  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:32:35.060425  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:32:35.085727  218533 cri.go:89] found id: ""
	I1122 00:32:35.085752  218533 logs.go:282] 0 containers: []
	W1122 00:32:35.085762  218533 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:32:35.085770  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:32:35.085819  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:32:35.111175  218533 cri.go:89] found id: "93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c"
	I1122 00:32:35.111198  218533 cri.go:89] found id: ""
	I1122 00:32:35.111207  218533 logs.go:282] 1 containers: [93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c]
	I1122 00:32:35.111259  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:32:35.115734  218533 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1122 00:32:35.115795  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:32:35.143384  218533 cri.go:89] found id: ""
	I1122 00:32:35.143408  218533 logs.go:282] 0 containers: []
	W1122 00:32:35.143418  218533 logs.go:284] No container was found matching "kindnet"
	I1122 00:32:35.143426  218533 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:32:35.143480  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:32:35.169885  218533 cri.go:89] found id: ""
	I1122 00:32:35.169908  218533 logs.go:282] 0 containers: []
	W1122 00:32:35.169915  218533 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:32:35.169931  218533 logs.go:123] Gathering logs for kube-controller-manager [93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c] ...
	I1122 00:32:35.169944  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c"
	I1122 00:32:35.196844  218533 logs.go:123] Gathering logs for CRI-O ...
	I1122 00:32:35.196869  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1122 00:32:35.261683  218533 logs.go:123] Gathering logs for kubelet ...
	I1122 00:32:35.261709  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:32:35.352402  218533 logs.go:123] Gathering logs for dmesg ...
	I1122 00:32:35.352432  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1122 00:32:35.366658  218533 logs.go:123] Gathering logs for kube-apiserver [31805891c113692793d324f62051c857bb1cbdf7c4889dd16e4c528a240a0a9b] ...
	I1122 00:32:35.366684  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 31805891c113692793d324f62051c857bb1cbdf7c4889dd16e4c528a240a0a9b"
	I1122 00:32:35.399065  218533 logs.go:123] Gathering logs for kube-apiserver [28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5] ...
	I1122 00:32:35.399098  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5"
	I1122 00:32:35.432742  218533 logs.go:123] Gathering logs for kube-scheduler [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33] ...
	I1122 00:32:35.432772  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:32:35.486638  218533 logs.go:123] Gathering logs for container status ...
	I1122 00:32:35.486671  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1122 00:32:35.518194  218533 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:32:35.518219  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
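
	The cri.go and logs.go sequence above reduces to two crictl invocations per component: list container IDs with `sudo crictl ps -a --quiet --name=<component>`, then tail each ID with `sudo crictl logs --tail 400 <id>`. A Go sketch of that shell-out (helper names are mine):

		package main

		import (
			"fmt"
			"os/exec"
			"strings"
		)

		// containerIDs runs the same listing the log shows:
		// sudo crictl ps -a --quiet --name=<component>  → one container ID per line.
		func containerIDs(component string) ([]string, error) {
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
			if err != nil {
				return nil, err
			}
			return strings.Fields(string(out)), nil
		}

		func main() {
			for _, component := range []string{"kube-apiserver", "etcd", "kube-scheduler"} {
				ids, err := containerIDs(component)
				if err != nil {
					fmt.Println(component, "listing failed:", err)
					continue
				}
				for _, id := range ids {
					// logs.go then gathers: sudo crictl logs --tail 400 <id>
					logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
					fmt.Printf("== %s [%s] ==\n%s\n", component, id, logs)
				}
			}
		}
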
	I1122 00:32:36.715928  261434 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1122 00:32:36.716160  261434 start.go:159] libmachine.API.Create for "default-k8s-diff-port-046175" (driver="docker")
	I1122 00:32:36.716190  261434 client.go:173] LocalClient.Create starting
	I1122 00:32:36.716241  261434 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem
	I1122 00:32:36.716278  261434 main.go:143] libmachine: Decoding PEM data...
	I1122 00:32:36.716300  261434 main.go:143] libmachine: Parsing certificate...
	I1122 00:32:36.716362  261434 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem
	I1122 00:32:36.716389  261434 main.go:143] libmachine: Decoding PEM data...
	I1122 00:32:36.716401  261434 main.go:143] libmachine: Parsing certificate...
	I1122 00:32:36.716689  261434 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-046175 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1122 00:32:36.735043  261434 cli_runner.go:211] docker network inspect default-k8s-diff-port-046175 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1122 00:32:36.735121  261434 network_create.go:284] running [docker network inspect default-k8s-diff-port-046175] to gather additional debugging logs...
	I1122 00:32:36.735141  261434 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-046175
	W1122 00:32:36.750941  261434 cli_runner.go:211] docker network inspect default-k8s-diff-port-046175 returned with exit code 1
	I1122 00:32:36.750965  261434 network_create.go:287] error running [docker network inspect default-k8s-diff-port-046175]: docker network inspect default-k8s-diff-port-046175: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-046175 not found
	I1122 00:32:36.750980  261434 network_create.go:289] output of [docker network inspect default-k8s-diff-port-046175]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-046175 not found
	
	** /stderr **
	I1122 00:32:36.751156  261434 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:32:36.769452  261434 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-680fbf0b84de IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8e:e6:2f:e5:eb:9f} reservation:<nil>}
	I1122 00:32:36.770339  261434 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-fb90361b5ee3 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:6a:cd:d3:0b:da:39} reservation:<nil>}
	I1122 00:32:36.771280  261434 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8fa9d9e8b5ac IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:be:02:4b:3f:d0:70} reservation:<nil>}
	I1122 00:32:36.772001  261434 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-31079b5ab75b IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:46:64:bf:b9:3e:b5} reservation:<nil>}
	I1122 00:32:36.773011  261434 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e479b0}
	I1122 00:32:36.773037  261434 network_create.go:124] attempt to create docker network default-k8s-diff-port-046175 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1122 00:32:36.773112  261434 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-046175 default-k8s-diff-port-046175
	I1122 00:32:36.819578  261434 network_create.go:108] docker network default-k8s-diff-port-046175 192.168.85.0/24 created
	I1122 00:32:36.819613  261434 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-046175" container
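
	The "skipping subnet" walk above shows the selection rule: the third octet advances by 9 (49, 58, 67, 76) until a free /24 is found, and the gateway and node IP become the .1 and .2 of the winning subnet (here 192.168.85.1 and 192.168.85.2). A sketch of that walk; the step of 9 is inferred from this log rather than quoted from minikube's source:

		package main

		import "fmt"

		// freeSubnet walks the candidates the log shows: the third octet advances
		// by 9 from 192.168.49.0/24 until an unused /24 turns up; the gateway and
		// node IP are the .1 and .2 of the winner.
		func freeSubnet(taken map[string]bool) (subnet, gateway, nodeIP string) {
			for third := 49; third < 256; third += 9 {
				s := fmt.Sprintf("192.168.%d.0/24", third)
				if taken[s] {
					continue
				}
				return s, fmt.Sprintf("192.168.%d.1", third), fmt.Sprintf("192.168.%d.2", third)
			}
			return "", "", ""
		}

		func main() {
			taken := map[string]bool{
				"192.168.49.0/24": true, "192.168.58.0/24": true,
				"192.168.67.0/24": true, "192.168.76.0/24": true,
			}
			fmt.Println(freeSubnet(taken)) // 192.168.85.0/24 192.168.85.1 192.168.85.2
		}

	Fixing the node at .2 is what lets the `docker run ... --ip 192.168.85.2` later in this log be computed before the container exists.
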
	I1122 00:32:36.819677  261434 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1122 00:32:36.838474  261434 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-046175 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-046175 --label created_by.minikube.sigs.k8s.io=true
	I1122 00:32:36.855384  261434 oci.go:103] Successfully created a docker volume default-k8s-diff-port-046175
	I1122 00:32:36.855448  261434 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-046175-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-046175 --entrypoint /usr/bin/test -v default-k8s-diff-port-046175:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -d /var/lib
	I1122 00:32:37.230374  261434 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-046175
	I1122 00:32:37.230445  261434 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:32:37.230460  261434 kic.go:194] Starting extracting preloaded images to volume ...
	I1122 00:32:37.230531  261434 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21934-9122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-046175:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -I lz4 -xf /preloaded.tar -C /extractDir
	W1122 00:32:39.997933  250396 node_ready.go:57] node "embed-certs-084979" has "Ready":"False" status (will retry)
	W1122 00:32:41.998569  250396 node_ready.go:57] node "embed-certs-084979" has "Ready":"False" status (will retry)
	I1122 00:32:41.586856  261434 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21934-9122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-046175:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -I lz4 -xf /preloaded.tar -C /extractDir: (4.356272502s)
	I1122 00:32:41.586884  261434 kic.go:203] duration metric: took 4.35642215s to extract preloaded images to volume ...
	W1122 00:32:41.586956  261434 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1122 00:32:41.586986  261434 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1122 00:32:41.587021  261434 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1122 00:32:41.639445  261434 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-046175 --name default-k8s-diff-port-046175 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-046175 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-046175 --network default-k8s-diff-port-046175 --ip 192.168.85.2 --volume default-k8s-diff-port-046175:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e
	I1122 00:32:41.930447  261434 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-046175 --format={{.State.Running}}
	I1122 00:32:41.949044  261434 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-046175 --format={{.State.Status}}
	I1122 00:32:41.968158  261434 cli_runner.go:164] Run: docker exec default-k8s-diff-port-046175 stat /var/lib/dpkg/alternatives/iptables
	I1122 00:32:42.016317  261434 oci.go:144] the created container "default-k8s-diff-port-046175" has a running status.
	I1122 00:32:42.016344  261434 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21934-9122/.minikube/machines/default-k8s-diff-port-046175/id_rsa...
	I1122 00:32:42.060753  261434 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21934-9122/.minikube/machines/default-k8s-diff-port-046175/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1122 00:32:42.088508  261434 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-046175 --format={{.State.Status}}
	I1122 00:32:42.108038  261434 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1122 00:32:42.108071  261434 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-046175 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1122 00:32:42.147297  261434 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-046175 --format={{.State.Status}}
	I1122 00:32:42.171019  261434 machine.go:94] provisionDockerMachine start ...
	I1122 00:32:42.171152  261434 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-046175
	I1122 00:32:42.189403  261434 main.go:143] libmachine: Using SSH client type: native
	I1122 00:32:42.189689  261434 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1122 00:32:42.189703  261434 main.go:143] libmachine: About to run SSH command:
	hostname
	I1122 00:32:42.190364  261434 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59476->127.0.0.1:33078: read: connection reset by peer
	I1122 00:32:45.312424  261434 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-046175
	
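	The handshake failure at 00:32:42 followed by a clean `hostname` result three seconds later is the usual dial-until-sshd-answers loop against the container's published port 33078. A sketch with golang.org/x/crypto/ssh, with host-key checking disabled as is typical for a throwaway test machine (paths and helper names are illustrative):

		package main

		import (
			"fmt"
			"os"
			"time"

			"golang.org/x/crypto/ssh"
		)

		// dialWithRetry keeps dialing until sshd inside the fresh container accepts
		// the handshake, which is why the single failure above is only a warning.
		func dialWithRetry(addr string, cfg *ssh.ClientConfig, timeout time.Duration) (*ssh.Client, error) {
			deadline := time.Now().Add(timeout)
			for {
				c, err := ssh.Dial("tcp", addr, cfg)
				if err == nil {
					return c, nil
				}
				if time.Now().After(deadline) {
					return nil, err
				}
				time.Sleep(time.Second)
			}
		}

		func main() {
			key, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/default-k8s-diff-port-046175/id_rsa"))
			if err != nil {
				panic(err)
			}
			signer, err := ssh.ParsePrivateKey(key)
			if err != nil {
				panic(err)
			}
			cfg := &ssh.ClientConfig{
				User:            "docker",
				Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
				HostKeyCallback: ssh.InsecureIgnoreHostKey(),
				Timeout:         5 * time.Second,
			}
			client, err := dialWithRetry("127.0.0.1:33078", cfg, time.Minute)
			if err != nil {
				panic(err)
			}
			defer client.Close()
			sess, err := client.NewSession()
			if err != nil {
				panic(err)
			}
			defer sess.Close()
			out, _ := sess.Output("hostname") // the exact command the log runs first
			fmt.Printf("%s", out)
		}
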
	I1122 00:32:45.312451  261434 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-046175"
	I1122 00:32:45.312520  261434 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-046175
	I1122 00:32:45.330094  261434 main.go:143] libmachine: Using SSH client type: native
	I1122 00:32:45.330332  261434 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1122 00:32:45.330346  261434 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-046175 && echo "default-k8s-diff-port-046175" | sudo tee /etc/hostname
	I1122 00:32:45.458700  261434 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-046175
	
	I1122 00:32:45.458789  261434 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-046175
	I1122 00:32:45.476551  261434 main.go:143] libmachine: Using SSH client type: native
	I1122 00:32:45.476883  261434 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1122 00:32:45.476918  261434 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-046175' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-046175/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-046175' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1122 00:32:45.596020  261434 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1122 00:32:45.596047  261434 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21934-9122/.minikube CaCertPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21934-9122/.minikube}
	I1122 00:32:45.596102  261434 ubuntu.go:190] setting up certificates
	I1122 00:32:45.596117  261434 provision.go:84] configureAuth start
	I1122 00:32:45.596196  261434 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-046175
	I1122 00:32:45.613935  261434 provision.go:143] copyHostCerts
	I1122 00:32:45.613993  261434 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9122/.minikube/ca.pem, removing ...
	I1122 00:32:45.614007  261434 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9122/.minikube/ca.pem
	I1122 00:32:45.614104  261434 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21934-9122/.minikube/ca.pem (1078 bytes)
	I1122 00:32:45.614215  261434 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9122/.minikube/cert.pem, removing ...
	I1122 00:32:45.614227  261434 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9122/.minikube/cert.pem
	I1122 00:32:45.614271  261434 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21934-9122/.minikube/cert.pem (1123 bytes)
	I1122 00:32:45.614356  261434 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9122/.minikube/key.pem, removing ...
	I1122 00:32:45.614365  261434 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9122/.minikube/key.pem
	I1122 00:32:45.614403  261434 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21934-9122/.minikube/key.pem (1675 bytes)
	I1122 00:32:45.614477  261434 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21934-9122/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-046175 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-046175 localhost minikube]
	I1122 00:32:45.748479  261434 provision.go:177] copyRemoteCerts
	I1122 00:32:45.748536  261434 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1122 00:32:45.748585  261434 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-046175
	I1122 00:32:45.765839  261434 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/default-k8s-diff-port-046175/id_rsa Username:docker}
	I1122 00:32:45.855090  261434 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1122 00:32:45.873847  261434 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1122 00:32:45.891430  261434 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1122 00:32:45.907811  261434 provision.go:87] duration metric: took 311.67862ms to configureAuth
	I1122 00:32:45.907834  261434 ubuntu.go:206] setting minikube options for container-runtime
	I1122 00:32:45.908001  261434 config.go:182] Loaded profile config "default-k8s-diff-port-046175": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:32:45.908133  261434 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-046175
	I1122 00:32:45.924839  261434 main.go:143] libmachine: Using SSH client type: native
	I1122 00:32:45.925136  261434 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1122 00:32:45.925162  261434 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1122 00:32:46.199313  261434 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1122 00:32:46.199356  261434 machine.go:97] duration metric: took 4.028299103s to provisionDockerMachine
	I1122 00:32:46.199370  261434 client.go:176] duration metric: took 9.483172544s to LocalClient.Create
	I1122 00:32:46.199385  261434 start.go:167] duration metric: took 9.483222923s to libmachine.API.Create "default-k8s-diff-port-046175"
	I1122 00:32:46.199398  261434 start.go:293] postStartSetup for "default-k8s-diff-port-046175" (driver="docker")
	I1122 00:32:46.199415  261434 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1122 00:32:46.199492  261434 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1122 00:32:46.199546  261434 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-046175
	I1122 00:32:46.216852  261434 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/default-k8s-diff-port-046175/id_rsa Username:docker}
	I1122 00:32:46.307287  261434 ssh_runner.go:195] Run: cat /etc/os-release
	I1122 00:32:46.311233  261434 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1122 00:32:46.311263  261434 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1122 00:32:46.311274  261434 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-9122/.minikube/addons for local assets ...
	I1122 00:32:46.311370  261434 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-9122/.minikube/files for local assets ...
	I1122 00:32:46.311486  261434 filesync.go:149] local asset: /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem -> 145852.pem in /etc/ssl/certs
	I1122 00:32:46.311641  261434 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1122 00:32:46.319687  261434 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem --> /etc/ssl/certs/145852.pem (1708 bytes)
	I1122 00:32:46.340280  261434 start.go:296] duration metric: took 140.866097ms for postStartSetup
	I1122 00:32:46.340598  261434 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-046175
	I1122 00:32:46.360307  261434 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/default-k8s-diff-port-046175/config.json ...
	I1122 00:32:46.360553  261434 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:32:46.360622  261434 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-046175
	I1122 00:32:46.379149  261434 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/default-k8s-diff-port-046175/id_rsa Username:docker}
	I1122 00:32:46.470003  261434 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 00:32:46.474659  261434 start.go:128] duration metric: took 9.760123533s to createHost
	I1122 00:32:46.474684  261434 start.go:83] releasing machines lock for "default-k8s-diff-port-046175", held for 9.760221202s
	I1122 00:32:46.474746  261434 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-046175
	I1122 00:32:46.492719  261434 ssh_runner.go:195] Run: cat /version.json
	I1122 00:32:46.492806  261434 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-046175
	I1122 00:32:46.492834  261434 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1122 00:32:46.492903  261434 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-046175
	I1122 00:32:46.513217  261434 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/default-k8s-diff-port-046175/id_rsa Username:docker}
	I1122 00:32:46.514217  261434 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/default-k8s-diff-port-046175/id_rsa Username:docker}
	I1122 00:32:46.675141  261434 ssh_runner.go:195] Run: systemctl --version
	I1122 00:32:46.681111  261434 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1122 00:32:46.715248  261434 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1122 00:32:46.719678  261434 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1122 00:32:46.719736  261434 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1122 00:32:46.744248  261434 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1122 00:32:46.744267  261434 start.go:496] detecting cgroup driver to use...
	I1122 00:32:46.744299  261434 detect.go:190] detected "systemd" cgroup driver on host os
	I1122 00:32:46.744342  261434 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1122 00:32:46.760940  261434 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1122 00:32:46.773332  261434 docker.go:218] disabling cri-docker service (if available) ...
	I1122 00:32:46.773392  261434 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1122 00:32:46.791047  261434 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1122 00:32:46.809232  261434 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1122 00:32:46.893087  261434 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1122 00:32:46.978448  261434 docker.go:234] disabling docker service ...
	I1122 00:32:46.978518  261434 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1122 00:32:46.995633  261434 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1122 00:32:47.007691  261434 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1122 00:32:47.100171  261434 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1122 00:32:47.180424  261434 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1122 00:32:47.191743  261434 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1122 00:32:47.205675  261434 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1122 00:32:47.205726  261434 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:32:47.215825  261434 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1122 00:32:47.215888  261434 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:32:47.225347  261434 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:32:47.233675  261434 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:32:47.242879  261434 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1122 00:32:47.250871  261434 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:32:47.260349  261434 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:32:47.273729  261434 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
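
	Taken together, the run of sed edits above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, force the systemd cgroup manager, put conmon in the pod cgroup, and open unprivileged ports through default_sysctls. A Go transliteration of those rewrites (regexes adapted from the sed expressions; a sketch, not minikube's code):

		package main

		import (
			"fmt"
			"regexp"
		)

		// applyCrioOverrides transliterates the sed pipeline from the log into Go
		// regexp rewrites over the contents of /etc/crio/crio.conf.d/02-crio.conf.
		func applyCrioOverrides(conf string) string {
			conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
				ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
			conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
				ReplaceAllString(conf, `cgroup_manager = "systemd"`)
			// drop any existing conmon_cgroup, then re-add it right after cgroup_manager
			conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
			conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
				ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")
			// ensure a default_sysctls list exists, then open unprivileged ports in it
			if !regexp.MustCompile(`(?m)^ *default_sysctls`).MatchString(conf) {
				conf = regexp.MustCompile(`(?m)^(conmon_cgroup = .*)$`).
					ReplaceAllString(conf, "$1\ndefault_sysctls = [\n]")
			}
			conf = regexp.MustCompile(`(?m)^default_sysctls *= *\[`).
				ReplaceAllString(conf, "default_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",")
			return conf
		}

		func main() {
			fmt.Println(applyCrioOverrides("pause_image = \"old\"\ncgroup_manager = \"cgroupfs\"\n"))
		}

	Editing the drop-in file rather than crio.conf itself keeps the overrides separate from the distribution defaults, which is why the subsequent `systemctl restart crio` is all that is needed to pick them up.
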
	I1122 00:32:47.282160  261434 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1122 00:32:47.289044  261434 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1122 00:32:47.296035  261434 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:32:47.374523  261434 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1122 00:32:47.504290  261434 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1122 00:32:47.504350  261434 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1122 00:32:47.508470  261434 start.go:564] Will wait 60s for crictl version
	I1122 00:32:47.508537  261434 ssh_runner.go:195] Run: which crictl
	I1122 00:32:47.511925  261434 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1122 00:32:47.535534  261434 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1122 00:32:47.535612  261434 ssh_runner.go:195] Run: crio --version
	I1122 00:32:47.561684  261434 ssh_runner.go:195] Run: crio --version
	I1122 00:32:47.589874  261434 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	W1122 00:32:44.497451  250396 node_ready.go:57] node "embed-certs-084979" has "Ready":"False" status (will retry)
	W1122 00:32:46.497968  250396 node_ready.go:57] node "embed-certs-084979" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Nov 22 00:32:11 no-preload-983546 crio[568]: time="2025-11-22T00:32:11.817146075Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 22 00:32:11 no-preload-983546 crio[568]: time="2025-11-22T00:32:11.820982272Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 22 00:32:11 no-preload-983546 crio[568]: time="2025-11-22T00:32:11.821004735Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 22 00:32:20 no-preload-983546 crio[568]: time="2025-11-22T00:32:20.985656077Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=8005889d-c4ae-4e7b-ad26-d6f3d525f3b8 name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:32:20 no-preload-983546 crio[568]: time="2025-11-22T00:32:20.98819799Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=369507f9-4ef0-4df7-922a-db08157e2fbe name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:32:20 no-preload-983546 crio[568]: time="2025-11-22T00:32:20.991398249Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-98spc/dashboard-metrics-scraper" id=34db75e7-a582-4f8e-9b58-86905a64afc7 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:32:20 no-preload-983546 crio[568]: time="2025-11-22T00:32:20.991543137Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:32:20 no-preload-983546 crio[568]: time="2025-11-22T00:32:20.999832969Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:32:21 no-preload-983546 crio[568]: time="2025-11-22T00:32:21.000301596Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:32:21 no-preload-983546 crio[568]: time="2025-11-22T00:32:21.024683554Z" level=info msg="Created container 44f5b1a4bec9348a3b57d7acfabe68de74e478695826968e3d4b17db6d1c654d: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-98spc/dashboard-metrics-scraper" id=34db75e7-a582-4f8e-9b58-86905a64afc7 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:32:21 no-preload-983546 crio[568]: time="2025-11-22T00:32:21.025214978Z" level=info msg="Starting container: 44f5b1a4bec9348a3b57d7acfabe68de74e478695826968e3d4b17db6d1c654d" id=39536217-c916-4dd3-acfa-3d3ee896e98a name=/runtime.v1.RuntimeService/StartContainer
	Nov 22 00:32:21 no-preload-983546 crio[568]: time="2025-11-22T00:32:21.027011158Z" level=info msg="Started container" PID=1756 containerID=44f5b1a4bec9348a3b57d7acfabe68de74e478695826968e3d4b17db6d1c654d description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-98spc/dashboard-metrics-scraper id=39536217-c916-4dd3-acfa-3d3ee896e98a name=/runtime.v1.RuntimeService/StartContainer sandboxID=c3315e63eff016597fd30c0a2ca1bd94bf8b1d36123649ed4305d3928066be56
	Nov 22 00:32:21 no-preload-983546 crio[568]: time="2025-11-22T00:32:21.093208481Z" level=info msg="Removing container: fe72961df99717a4cc67ae6552824d80d27886ee90cdb94f6550c941d15ba492" id=2cbda79c-7b52-432e-a46e-a3bf4ee4494c name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 22 00:32:21 no-preload-983546 crio[568]: time="2025-11-22T00:32:21.102336427Z" level=info msg="Removed container fe72961df99717a4cc67ae6552824d80d27886ee90cdb94f6550c941d15ba492: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-98spc/dashboard-metrics-scraper" id=2cbda79c-7b52-432e-a46e-a3bf4ee4494c name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 22 00:32:32 no-preload-983546 crio[568]: time="2025-11-22T00:32:32.119987382Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=e28bee9f-0d0b-4d1e-ac28-d8b0ace47dbe name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:32:32 no-preload-983546 crio[568]: time="2025-11-22T00:32:32.120895377Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=00240263-c42d-4f12-9a2c-61e5745b3c0c name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:32:32 no-preload-983546 crio[568]: time="2025-11-22T00:32:32.12201947Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=1089f06e-2aad-4abe-b05c-0797f50801cf name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:32:32 no-preload-983546 crio[568]: time="2025-11-22T00:32:32.122176917Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:32:32 no-preload-983546 crio[568]: time="2025-11-22T00:32:32.127401898Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:32:32 no-preload-983546 crio[568]: time="2025-11-22T00:32:32.127588488Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/693fb75a5d6b13bb5d394ad2db2386c50a71f77028983203349861908fe8b047/merged/etc/passwd: no such file or directory"
	Nov 22 00:32:32 no-preload-983546 crio[568]: time="2025-11-22T00:32:32.127621476Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/693fb75a5d6b13bb5d394ad2db2386c50a71f77028983203349861908fe8b047/merged/etc/group: no such file or directory"
	Nov 22 00:32:32 no-preload-983546 crio[568]: time="2025-11-22T00:32:32.12789732Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:32:32 no-preload-983546 crio[568]: time="2025-11-22T00:32:32.157278433Z" level=info msg="Created container 8c92a15018c69e9b3f0bea0b14a42cab51aa67d3463c26f8e3d6595b3b7a1e9b: kube-system/storage-provisioner/storage-provisioner" id=1089f06e-2aad-4abe-b05c-0797f50801cf name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:32:32 no-preload-983546 crio[568]: time="2025-11-22T00:32:32.15778741Z" level=info msg="Starting container: 8c92a15018c69e9b3f0bea0b14a42cab51aa67d3463c26f8e3d6595b3b7a1e9b" id=990d0b93-4751-4977-98e9-4dab6975c0f3 name=/runtime.v1.RuntimeService/StartContainer
	Nov 22 00:32:32 no-preload-983546 crio[568]: time="2025-11-22T00:32:32.159988952Z" level=info msg="Started container" PID=1773 containerID=8c92a15018c69e9b3f0bea0b14a42cab51aa67d3463c26f8e3d6595b3b7a1e9b description=kube-system/storage-provisioner/storage-provisioner id=990d0b93-4751-4977-98e9-4dab6975c0f3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=90785d50afe9440a8628659601748f3672892a30c63659f8278d4a2b5e597769
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	8c92a15018c69       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           16 seconds ago      Running             storage-provisioner         1                   90785d50afe94       storage-provisioner                          kube-system
	44f5b1a4bec93       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           27 seconds ago      Exited              dashboard-metrics-scraper   2                   c3315e63eff01       dashboard-metrics-scraper-6ffb444bf9-98spc   kubernetes-dashboard
	0065675f0b413       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   38 seconds ago      Running             kubernetes-dashboard        0                   ae386be23023b       kubernetes-dashboard-855c9754f9-fb2ss        kubernetes-dashboard
	76af8b2fe949b       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           47 seconds ago      Running             coredns                     0                   74277d2daaea7       coredns-66bc5c9577-4psr2                     kube-system
	8e69bcc2ba825       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           47 seconds ago      Running             busybox                     1                   e1a2b790e52d8       busybox                                      default
	f501552abfbef       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           47 seconds ago      Running             kube-proxy                  0                   4c649c734c20f       kube-proxy-gnlfp                             kube-system
	2365588cd931c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           47 seconds ago      Exited              storage-provisioner         0                   90785d50afe94       storage-provisioner                          kube-system
	95fad35fc6518       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           47 seconds ago      Running             kindnet-cni                 0                   00788dafb2a97       kindnet-rpr2g                                kube-system
	15ff8ca6c3bd3       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           51 seconds ago      Running             etcd                        0                   cafc401b5d9ea       etcd-no-preload-983546                       kube-system
	2395f0fc0ddc2       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           51 seconds ago      Running             kube-apiserver              0                   d0ac9fbab2845       kube-apiserver-no-preload-983546             kube-system
	2e71abd401006       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           51 seconds ago      Running             kube-controller-manager     0                   8c355ae6a5bcb       kube-controller-manager-no-preload-983546    kube-system
	748b8383a47b0       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           51 seconds ago      Running             kube-scheduler              0                   ee44ee75390a7       kube-scheduler-no-preload-983546             kube-system
	
	
	==> coredns [76af8b2fe949b0dc8efe070c8a594ad008ad64c67858fd8f0ae558a32f45fe76] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46768 - 11999 "HINFO IN 6219612061679094651.3936230835065146805. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.091587976s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
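	The dial tcp 10.96.0.1:443 i/o timeouts above mean coredns could not reach the kube-apiserver Service VIP while its caches were syncing; the first storage-provisioner log further down dies on the same address. VIP reachability can be probed from inside the cluster (a sketch; probe pod name and image are illustrative):
	
	  kubectl --context no-preload-983546 run vip-probe --rm -i --restart=Never \
	    --image=curlimages/curl -- curl -sk -m 5 https://10.96.0.1/version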
	
	
	==> describe nodes <==
	Name:               no-preload-983546
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-983546
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785
	                    minikube.k8s.io/name=no-preload-983546
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_22T00_31_03_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 22 Nov 2025 00:31:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-983546
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 22 Nov 2025 00:32:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 22 Nov 2025 00:32:31 +0000   Sat, 22 Nov 2025 00:30:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 22 Nov 2025 00:32:31 +0000   Sat, 22 Nov 2025 00:30:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 22 Nov 2025 00:32:31 +0000   Sat, 22 Nov 2025 00:30:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 22 Nov 2025 00:32:31 +0000   Sat, 22 Nov 2025 00:31:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-983546
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 5665009e93b91d39dc05718b691e3875
	  System UUID:                1d18d6ff-8b0a-4769-8dee-cdd1e29786a3
	  Boot ID:                    8e6acb0e-95c3-406d-a4d5-d86e16610b0f
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 coredns-66bc5c9577-4psr2                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     101s
	  kube-system                 etcd-no-preload-983546                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         108s
	  kube-system                 kindnet-rpr2g                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      102s
	  kube-system                 kube-apiserver-no-preload-983546              250m (3%)     0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-controller-manager-no-preload-983546     200m (2%)     0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-proxy-gnlfp                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 kube-scheduler-no-preload-983546              100m (1%)     0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-98spc    0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-fb2ss         0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 99s                kube-proxy       
	  Normal  Starting                 47s                kube-proxy       
	  Normal  Starting                 107s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  106s               kubelet          Node no-preload-983546 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    106s               kubelet          Node no-preload-983546 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     106s               kubelet          Node no-preload-983546 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           102s               node-controller  Node no-preload-983546 event: Registered Node no-preload-983546 in Controller
	  Normal  NodeReady                88s                kubelet          Node no-preload-983546 status is now: NodeReady
	  Normal  Starting                 53s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  52s (x8 over 53s)  kubelet          Node no-preload-983546 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    52s (x8 over 53s)  kubelet          Node no-preload-983546 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     52s (x8 over 53s)  kubelet          Node no-preload-983546 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           46s                node-controller  Node no-preload-983546 event: Registered Node no-preload-983546 in Controller
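	The node dump above is ordinary kubectl describe output; it can be reproduced against the live profile with:
	
	  kubectl --context no-preload-983546 describe node no-preload-983546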
	
	
	==> dmesg <==
	[  +0.082960] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023673] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.276450] kauditd_printk_skb: 47 callbacks suppressed
	[Nov21 23:48] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.062274] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.023896] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.023887] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.023879] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.023873] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +2.047773] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +4.031577] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[Nov21 23:49] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[ +16.383304] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[ +32.252695] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
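	The repeated "martian source 10.244.0.21 from 127.0.0.1" entries are consistent with kube-proxy enabling route_localnet (it logs doing exactly that below), which lets loopback-sourced traffic reach pod IPs and trips the kernel's martian check. The relevant sysctls can be read back off the node (a sketch; assumes the node container's network namespace is the one of interest):
	
	  docker exec no-preload-983546 sysctl net.ipv4.conf.all.route_localnet net.ipv4.conf.all.log_martians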
	
	
	==> etcd [15ff8ca6c3bd36d322a741daeef18c6b81980f6be123f1eccf822d0b1ce32e19] <==
	{"level":"warn","ts":"2025-11-22T00:31:59.551566Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:31:59.553649Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:31:59.565502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:31:59.575711Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:31:59.592128Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:31:59.608666Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:31:59.616169Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:31:59.631872Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:31:59.643140Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:31:59.662297Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:31:59.672862Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:31:59.681600Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:31:59.690860Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:31:59.701986Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:31:59.711853Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:31:59.724580Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:31:59.732467Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:31:59.743857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:31:59.757182Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:31:59.766130Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:31:59.772240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:31:59.780787Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:31:59.861671Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51596","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-22T00:32:41.210377Z","caller":"traceutil/trace.go:172","msg":"trace[111803095] transaction","detail":"{read_only:false; response_revision:634; number_of_response:1; }","duration":"110.776313ms","start":"2025-11-22T00:32:41.099583Z","end":"2025-11-22T00:32:41.210360Z","steps":["trace[111803095] 'process raft request'  (duration: 110.738735ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-22T00:32:41.210463Z","caller":"traceutil/trace.go:172","msg":"trace[1843632463] transaction","detail":"{read_only:false; response_revision:633; number_of_response:1; }","duration":"166.94869ms","start":"2025-11-22T00:32:41.043492Z","end":"2025-11-22T00:32:41.210441Z","steps":["trace[1843632463] 'process raft request'  (duration: 101.968494ms)","trace[1843632463] 'compare'  (duration: 64.75184ms)"],"step_count":2}
	
	
	==> kernel <==
	 00:32:49 up  1:15,  0 user,  load average: 2.77, 2.94, 1.85
	Linux no-preload-983546 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [95fad35fc65181b4daf9312acf2f79014f290dccfe9528995d782fe1fbb107aa] <==
	I1122 00:32:01.590455       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1122 00:32:01.590702       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1122 00:32:01.590854       1 main.go:148] setting mtu 1500 for CNI 
	I1122 00:32:01.590881       1 main.go:178] kindnetd IP family: "ipv4"
	I1122 00:32:01.590904       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-22T00:32:01Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1122 00:32:01.797398       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1122 00:32:01.797552       1 controller.go:381] "Waiting for informer caches to sync"
	I1122 00:32:01.797591       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1122 00:32:01.799098       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1122 00:32:02.098184       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1122 00:32:02.098221       1 metrics.go:72] Registering metrics
	I1122 00:32:02.098271       1 controller.go:711] "Syncing nftables rules"
	I1122 00:32:11.797815       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1122 00:32:11.797868       1 main.go:301] handling current node
	I1122 00:32:21.797785       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1122 00:32:21.797839       1 main.go:301] handling current node
	I1122 00:32:31.797121       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1122 00:32:31.797156       1 main.go:301] handling current node
	I1122 00:32:41.804196       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1122 00:32:41.804234       1 main.go:301] handling current node
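	kindnet here is handling only its own node every ten seconds and has synced its network-policy nftables rules; the earlier nri.sock dial failure just means the optional NRI plugin path is inactive. What it installed can be listed on the node, assuming nft is available in the node image (a sketch):
	
	  docker exec no-preload-983546 nft list tables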
	
	
	==> kube-apiserver [2395f0fc0ddc2558b662ecf094a2c9137111096336ce24f63f4bb978edacc84d] <==
	I1122 00:32:00.536693       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1122 00:32:00.536525       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1122 00:32:00.536864       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1122 00:32:00.537399       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1122 00:32:00.540158       1 aggregator.go:171] initial CRD sync complete...
	I1122 00:32:00.540191       1 autoregister_controller.go:144] Starting autoregister controller
	I1122 00:32:00.540237       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1122 00:32:00.540252       1 cache.go:39] Caches are synced for autoregister controller
	I1122 00:32:00.537379       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1122 00:32:00.550485       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1122 00:32:00.557127       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1122 00:32:00.564677       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1122 00:32:00.564744       1 policy_source.go:240] refreshing policies
	I1122 00:32:00.576262       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1122 00:32:00.929614       1 controller.go:667] quota admission added evaluator for: namespaces
	I1122 00:32:00.961157       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1122 00:32:00.981280       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1122 00:32:01.013102       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1122 00:32:01.022372       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1122 00:32:01.068079       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.41.115"}
	I1122 00:32:01.079791       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.190.148"}
	I1122 00:32:01.439985       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1122 00:32:03.949752       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1122 00:32:04.146775       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1122 00:32:04.495460       1 controller.go:667] quota admission added evaluator for: endpoints
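	The two "allocated clusterIPs" lines correspond to the dashboard Services created during this run; they can be cross-checked directly:
	
	  kubectl --context no-preload-983546 get svc -n kubernetes-dashboard -o wide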
	
	
	==> kube-controller-manager [2e71abd4010063bf4aff10634290d6163b0d784274776fb107399539e1af2d22] <==
	I1122 00:32:03.869876       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1122 00:32:03.872111       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1122 00:32:03.883898       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1122 00:32:03.885282       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1122 00:32:03.886398       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1122 00:32:03.888615       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1122 00:32:03.891859       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1122 00:32:03.893038       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1122 00:32:03.893356       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1122 00:32:03.894409       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1122 00:32:03.894434       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1122 00:32:03.894647       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1122 00:32:03.898566       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1122 00:32:03.906804       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1122 00:32:03.906873       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1122 00:32:03.906929       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1122 00:32:03.906936       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1122 00:32:03.906943       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1122 00:32:03.909083       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1122 00:32:03.909181       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1122 00:32:03.909250       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-983546"
	I1122 00:32:03.909322       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1122 00:32:03.911453       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1122 00:32:03.912704       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1122 00:32:03.912791       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [f501552abfbef5c6045420811113cd307ddce87f32aa846610d14b395f7e0108] <==
	I1122 00:32:01.410125       1 server_linux.go:53] "Using iptables proxy"
	I1122 00:32:01.480314       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1122 00:32:01.580578       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1122 00:32:01.580616       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1122 00:32:01.580711       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1122 00:32:01.626084       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1122 00:32:01.626471       1 server_linux.go:132] "Using iptables Proxier"
	I1122 00:32:01.638483       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1122 00:32:01.639395       1 server.go:527] "Version info" version="v1.34.1"
	I1122 00:32:01.639657       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:32:01.644511       1 config.go:200] "Starting service config controller"
	I1122 00:32:01.644604       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1122 00:32:01.647311       1 config.go:106] "Starting endpoint slice config controller"
	I1122 00:32:01.648969       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1122 00:32:01.650161       1 config.go:309] "Starting node config controller"
	I1122 00:32:01.650178       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1122 00:32:01.650187       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1122 00:32:01.650165       1 config.go:403] "Starting serviceCIDR config controller"
	I1122 00:32:01.650209       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1122 00:32:01.746175       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1122 00:32:01.749458       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1122 00:32:01.751319       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [748b8383a47b0f40485edc4c674299b4dcb993eccaae00337a17f00f55de0076] <==
	I1122 00:31:58.572870       1 serving.go:386] Generated self-signed cert in-memory
	W1122 00:32:00.480426       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1122 00:32:00.480462       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1122 00:32:00.480475       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1122 00:32:00.480485       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1122 00:32:00.512926       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1122 00:32:00.513091       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:32:00.516835       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:32:00.516877       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:32:00.517383       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1122 00:32:00.517476       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1122 00:32:00.617049       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
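	The scheduler could not read the extension-apiserver-authentication configmap and fell back to treating requests as anonymous; the warning carries its own remedy, spelled out here as a runnable shape with the log's placeholders kept as-is (a sketch):
	
	  kubectl -n kube-system create rolebinding ROLEBINDING_NAME \
	    --role=extension-apiserver-authentication-reader \
	    --serviceaccount=YOUR_NS:YOUR_SA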
	
	
	==> kubelet <==
	Nov 22 00:32:04 no-preload-983546 kubelet[720]: I1122 00:32:04.401255     720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxtg9\" (UniqueName: \"kubernetes.io/projected/4c34d579-4516-4496-b0a3-b9777443a826-kube-api-access-hxtg9\") pod \"dashboard-metrics-scraper-6ffb444bf9-98spc\" (UID: \"4c34d579-4516-4496-b0a3-b9777443a826\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-98spc"
	Nov 22 00:32:04 no-preload-983546 kubelet[720]: I1122 00:32:04.401276     720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/4c34d579-4516-4496-b0a3-b9777443a826-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-98spc\" (UID: \"4c34d579-4516-4496-b0a3-b9777443a826\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-98spc"
	Nov 22 00:32:04 no-preload-983546 kubelet[720]: I1122 00:32:04.401294     720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/c903c3de-d57d-4f5d-9a37-79b8cd83c15c-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-fb2ss\" (UID: \"c903c3de-d57d-4f5d-9a37-79b8cd83c15c\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-fb2ss"
	Nov 22 00:32:07 no-preload-983546 kubelet[720]: I1122 00:32:07.051428     720 scope.go:117] "RemoveContainer" containerID="dfbff8208e29d897d0219cbadb2bd4f0849cc5732cc2858514f0c552e3fa1f63"
	Nov 22 00:32:08 no-preload-983546 kubelet[720]: I1122 00:32:08.055728     720 scope.go:117] "RemoveContainer" containerID="dfbff8208e29d897d0219cbadb2bd4f0849cc5732cc2858514f0c552e3fa1f63"
	Nov 22 00:32:08 no-preload-983546 kubelet[720]: I1122 00:32:08.055886     720 scope.go:117] "RemoveContainer" containerID="fe72961df99717a4cc67ae6552824d80d27886ee90cdb94f6550c941d15ba492"
	Nov 22 00:32:08 no-preload-983546 kubelet[720]: E1122 00:32:08.056096     720 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-98spc_kubernetes-dashboard(4c34d579-4516-4496-b0a3-b9777443a826)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-98spc" podUID="4c34d579-4516-4496-b0a3-b9777443a826"
	Nov 22 00:32:09 no-preload-983546 kubelet[720]: I1122 00:32:09.061259     720 scope.go:117] "RemoveContainer" containerID="fe72961df99717a4cc67ae6552824d80d27886ee90cdb94f6550c941d15ba492"
	Nov 22 00:32:09 no-preload-983546 kubelet[720]: E1122 00:32:09.061456     720 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-98spc_kubernetes-dashboard(4c34d579-4516-4496-b0a3-b9777443a826)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-98spc" podUID="4c34d579-4516-4496-b0a3-b9777443a826"
	Nov 22 00:32:10 no-preload-983546 kubelet[720]: I1122 00:32:10.064375     720 scope.go:117] "RemoveContainer" containerID="fe72961df99717a4cc67ae6552824d80d27886ee90cdb94f6550c941d15ba492"
	Nov 22 00:32:10 no-preload-983546 kubelet[720]: E1122 00:32:10.064607     720 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-98spc_kubernetes-dashboard(4c34d579-4516-4496-b0a3-b9777443a826)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-98spc" podUID="4c34d579-4516-4496-b0a3-b9777443a826"
	Nov 22 00:32:11 no-preload-983546 kubelet[720]: I1122 00:32:11.095530     720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-fb2ss" podStartSLOduration=1.490162185 podStartE2EDuration="7.095511532s" podCreationTimestamp="2025-11-22 00:32:04 +0000 UTC" firstStartedPulling="2025-11-22 00:32:04.647808526 +0000 UTC m=+7.770520558" lastFinishedPulling="2025-11-22 00:32:10.253157873 +0000 UTC m=+13.375869905" observedRunningTime="2025-11-22 00:32:11.095305975 +0000 UTC m=+14.218018030" watchObservedRunningTime="2025-11-22 00:32:11.095511532 +0000 UTC m=+14.218223584"
	Nov 22 00:32:20 no-preload-983546 kubelet[720]: I1122 00:32:20.985198     720 scope.go:117] "RemoveContainer" containerID="fe72961df99717a4cc67ae6552824d80d27886ee90cdb94f6550c941d15ba492"
	Nov 22 00:32:21 no-preload-983546 kubelet[720]: I1122 00:32:21.091964     720 scope.go:117] "RemoveContainer" containerID="fe72961df99717a4cc67ae6552824d80d27886ee90cdb94f6550c941d15ba492"
	Nov 22 00:32:21 no-preload-983546 kubelet[720]: I1122 00:32:21.092212     720 scope.go:117] "RemoveContainer" containerID="44f5b1a4bec9348a3b57d7acfabe68de74e478695826968e3d4b17db6d1c654d"
	Nov 22 00:32:21 no-preload-983546 kubelet[720]: E1122 00:32:21.092399     720 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-98spc_kubernetes-dashboard(4c34d579-4516-4496-b0a3-b9777443a826)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-98spc" podUID="4c34d579-4516-4496-b0a3-b9777443a826"
	Nov 22 00:32:28 no-preload-983546 kubelet[720]: I1122 00:32:28.306137     720 scope.go:117] "RemoveContainer" containerID="44f5b1a4bec9348a3b57d7acfabe68de74e478695826968e3d4b17db6d1c654d"
	Nov 22 00:32:28 no-preload-983546 kubelet[720]: E1122 00:32:28.306303     720 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-98spc_kubernetes-dashboard(4c34d579-4516-4496-b0a3-b9777443a826)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-98spc" podUID="4c34d579-4516-4496-b0a3-b9777443a826"
	Nov 22 00:32:32 no-preload-983546 kubelet[720]: I1122 00:32:32.119574     720 scope.go:117] "RemoveContainer" containerID="2365588cd931c5a0a64bdb393d00bff477ceb2e6f2d00673d57d9ebfcf40c30d"
	Nov 22 00:32:40 no-preload-983546 kubelet[720]: I1122 00:32:40.985207     720 scope.go:117] "RemoveContainer" containerID="44f5b1a4bec9348a3b57d7acfabe68de74e478695826968e3d4b17db6d1c654d"
	Nov 22 00:32:40 no-preload-983546 kubelet[720]: E1122 00:32:40.985374     720 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-98spc_kubernetes-dashboard(4c34d579-4516-4496-b0a3-b9777443a826)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-98spc" podUID="4c34d579-4516-4496-b0a3-b9777443a826"
	Nov 22 00:32:46 no-preload-983546 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 22 00:32:46 no-preload-983546 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 22 00:32:46 no-preload-983546 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 22 00:32:46 no-preload-983546 systemd[1]: kubelet.service: Consumed 1.455s CPU time.
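	The kubelet spends this window cycling dashboard-metrics-scraper through CrashLoopBackOff (back-off 10s, then 20s) until the pause under test stops the service. The crashing container's previous attempt is where the useful evidence lives (a sketch; pod name copied from the kubelet lines above):
	
	  kubectl --context no-preload-983546 -n kubernetes-dashboard \
	    logs dashboard-metrics-scraper-6ffb444bf9-98spc --previous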
	
	
	==> kubernetes-dashboard [0065675f0b4131b7a62e9267f42c240303e8fb7718d81fed3827230d2167933b] <==
	2025/11/22 00:32:10 Using namespace: kubernetes-dashboard
	2025/11/22 00:32:10 Using in-cluster config to connect to apiserver
	2025/11/22 00:32:10 Using secret token for csrf signing
	2025/11/22 00:32:10 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/22 00:32:10 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/22 00:32:10 Successful initial request to the apiserver, version: v1.34.1
	2025/11/22 00:32:10 Generating JWE encryption key
	2025/11/22 00:32:10 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/22 00:32:10 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/22 00:32:10 Initializing JWE encryption key from synchronized object
	2025/11/22 00:32:10 Creating in-cluster Sidecar client
	2025/11/22 00:32:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/22 00:32:10 Serving insecurely on HTTP port: 9090
	2025/11/22 00:32:10 Starting overwatch
	2025/11/22 00:32:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [2365588cd931c5a0a64bdb393d00bff477ceb2e6f2d00673d57d9ebfcf40c30d] <==
	I1122 00:32:01.362162       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1122 00:32:31.365426       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [8c92a15018c69e9b3f0bea0b14a42cab51aa67d3463c26f8e3d6595b3b7a1e9b] <==
	I1122 00:32:32.173797       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1122 00:32:32.181787       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1122 00:32:32.181834       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1122 00:32:32.183930       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:32:35.638687       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:32:39.899374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:32:43.497632       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:32:46.552247       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
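	The deprecation warnings come from the provisioner's leader election still going through v1 Endpoints; the object it keeps renewing can be inspected directly:
	
	  kubectl --context no-preload-983546 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml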
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-983546 -n no-preload-983546
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-983546 -n no-preload-983546: exit status 2 (320.791125ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-983546 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-983546
helpers_test.go:243: (dbg) docker inspect no-preload-983546:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c2d293e7736fe65170e6cfd040c09329134034bfda01d91b453ec0eec7c9e352",
	        "Created": "2025-11-22T00:30:36.232639451Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 253019,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-22T00:31:50.874168121Z",
	            "FinishedAt": "2025-11-22T00:31:49.972307889Z"
	        },
	        "Image": "sha256:e5906b22e872a17998ae88aee6d850484e7a99144e0db6afcf2c44a53e6042d4",
	        "ResolvConfPath": "/var/lib/docker/containers/c2d293e7736fe65170e6cfd040c09329134034bfda01d91b453ec0eec7c9e352/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c2d293e7736fe65170e6cfd040c09329134034bfda01d91b453ec0eec7c9e352/hostname",
	        "HostsPath": "/var/lib/docker/containers/c2d293e7736fe65170e6cfd040c09329134034bfda01d91b453ec0eec7c9e352/hosts",
	        "LogPath": "/var/lib/docker/containers/c2d293e7736fe65170e6cfd040c09329134034bfda01d91b453ec0eec7c9e352/c2d293e7736fe65170e6cfd040c09329134034bfda01d91b453ec0eec7c9e352-json.log",
	        "Name": "/no-preload-983546",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-983546:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-983546",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c2d293e7736fe65170e6cfd040c09329134034bfda01d91b453ec0eec7c9e352",
	                "LowerDir": "/var/lib/docker/overlay2/bc6393df2dd76d86cdad678716ec02d56105fb8e0b4e3f663f709c883352904b-init/diff:/var/lib/docker/overlay2/ba9391341a6f38c98c330a240b44a37e901d779a7c95e15c141c59c46e09c348/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bc6393df2dd76d86cdad678716ec02d56105fb8e0b4e3f663f709c883352904b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bc6393df2dd76d86cdad678716ec02d56105fb8e0b4e3f663f709c883352904b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bc6393df2dd76d86cdad678716ec02d56105fb8e0b4e3f663f709c883352904b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-983546",
	                "Source": "/var/lib/docker/volumes/no-preload-983546/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-983546",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-983546",
	                "name.minikube.sigs.k8s.io": "no-preload-983546",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "e3b2fe3ffd629dc8078f7123b89cc10981904efd5551b93ce827bde19ea063da",
	            "SandboxKey": "/var/run/docker/netns/e3b2fe3ffd62",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33073"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33074"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33077"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33075"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33076"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-983546": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "31079b5ab75bb84607cf8165e3a4b768618e4392cb34bdd501083b6a67908eda",
	                    "EndpointID": "c2c4b0f42f4e343dd244d3976ea48a347498a84d9b70854e18f267bbd0a245ef",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "0e:13:f4:24:27:f2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-983546",
	                        "c2d293e7736f"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
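Note on the inspect output above: the empty "HostPort" values under HostConfig.PortBindings mean the container publishes each port to a loopback-only ephemeral host port, and Docker's actual assignments appear under NetworkSettings.Ports (22/tcp on 127.0.0.1:33073, 8443/tcp on 127.0.0.1:33076, and so on). A minimal Go sketch of reading one mapping back, using the same inspect template minikube itself runs later in this log; the container name is taken from the JSON above:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hostPort asks Docker for the host side of one published container port,
	// with the same Go template the provisioning log below uses for 22/tcp.
	func hostPort(container, portProto string) (string, error) {
		tmpl := fmt.Sprintf("{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}", portProto)
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		p, err := hostPort("no-preload-983546", "22/tcp")
		if err != nil {
			panic(err)
		}
		fmt.Println("ssh published on 127.0.0.1:" + p) // 33073 in the JSON above
	}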
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-983546 -n no-preload-983546
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-983546 -n no-preload-983546: exit status 2 (322.347431ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
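The "(may be ok)" remark is deliberate: --format={{.Host}} prints Running because the container is up, while the nonzero exit reports that other components (kubelet, apiserver) are not Running, which is the expected state right after a pause. A hedged sketch of telling that case apart from a hard failure, reusing the binary path and profile from the command above:

	package main

	import (
		"errors"
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "status", "--format={{.Host}}", "-p", "no-preload-983546")
		out, err := cmd.Output()
		var ee *exec.ExitError
		switch {
		case err == nil:
			fmt.Println("all components running")
		case errors.As(err, &ee):
			// nonzero exit: the host can be fine while components are stopped or paused
			fmt.Printf("host=%s exit=%d (may be ok after pause)\n", strings.TrimSpace(string(out)), ee.ExitCode())
		default:
			log.Fatal(err) // the binary could not be run at all
		}
	}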
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-983546 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-983546 logs -n 25: (1.127129519s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p old-k8s-version-377321 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-377321       │ jenkins │ v1.37.0 │ 22 Nov 25 00:30 UTC │ 22 Nov 25 00:31 UTC │
	│ stop    │ -p NoKubernetes-953061                                                                                                                                                                                                                        │ NoKubernetes-953061          │ jenkins │ v1.37.0 │ 22 Nov 25 00:30 UTC │ 22 Nov 25 00:30 UTC │
	│ start   │ -p NoKubernetes-953061 --driver=docker  --container-runtime=crio                                                                                                                                                                              │ NoKubernetes-953061          │ jenkins │ v1.37.0 │ 22 Nov 25 00:30 UTC │ 22 Nov 25 00:30 UTC │
	│ ssh     │ -p NoKubernetes-953061 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-953061          │ jenkins │ v1.37.0 │ 22 Nov 25 00:30 UTC │                     │
	│ delete  │ -p NoKubernetes-953061                                                                                                                                                                                                                        │ NoKubernetes-953061          │ jenkins │ v1.37.0 │ 22 Nov 25 00:30 UTC │ 22 Nov 25 00:30 UTC │
	│ start   │ -p no-preload-983546 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-983546            │ jenkins │ v1.37.0 │ 22 Nov 25 00:30 UTC │ 22 Nov 25 00:31 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-377321 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-377321       │ jenkins │ v1.37.0 │ 22 Nov 25 00:31 UTC │                     │
	│ stop    │ -p old-k8s-version-377321 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-377321       │ jenkins │ v1.37.0 │ 22 Nov 25 00:31 UTC │ 22 Nov 25 00:31 UTC │
	│ addons  │ enable metrics-server -p no-preload-983546 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-983546            │ jenkins │ v1.37.0 │ 22 Nov 25 00:31 UTC │                     │
	│ addons  │ enable dashboard -p old-k8s-version-377321 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-377321       │ jenkins │ v1.37.0 │ 22 Nov 25 00:31 UTC │ 22 Nov 25 00:31 UTC │
	│ start   │ -p old-k8s-version-377321 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-377321       │ jenkins │ v1.37.0 │ 22 Nov 25 00:31 UTC │ 22 Nov 25 00:32 UTC │
	│ stop    │ -p no-preload-983546 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-983546            │ jenkins │ v1.37.0 │ 22 Nov 25 00:31 UTC │ 22 Nov 25 00:31 UTC │
	│ start   │ -p cert-expiration-624739 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-624739       │ jenkins │ v1.37.0 │ 22 Nov 25 00:31 UTC │ 22 Nov 25 00:31 UTC │
	│ delete  │ -p cert-expiration-624739                                                                                                                                                                                                                     │ cert-expiration-624739       │ jenkins │ v1.37.0 │ 22 Nov 25 00:31 UTC │ 22 Nov 25 00:31 UTC │
	│ start   │ -p embed-certs-084979 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-084979           │ jenkins │ v1.37.0 │ 22 Nov 25 00:31 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-983546 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-983546            │ jenkins │ v1.37.0 │ 22 Nov 25 00:31 UTC │ 22 Nov 25 00:31 UTC │
	│ start   │ -p no-preload-983546 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-983546            │ jenkins │ v1.37.0 │ 22 Nov 25 00:31 UTC │ 22 Nov 25 00:32 UTC │
	│ image   │ old-k8s-version-377321 image list --format=json                                                                                                                                                                                               │ old-k8s-version-377321       │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │ 22 Nov 25 00:32 UTC │
	│ pause   │ -p old-k8s-version-377321 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-377321       │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │                     │
	│ delete  │ -p old-k8s-version-377321                                                                                                                                                                                                                     │ old-k8s-version-377321       │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │ 22 Nov 25 00:32 UTC │
	│ delete  │ -p old-k8s-version-377321                                                                                                                                                                                                                     │ old-k8s-version-377321       │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │ 22 Nov 25 00:32 UTC │
	│ delete  │ -p disable-driver-mounts-751225                                                                                                                                                                                                               │ disable-driver-mounts-751225 │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │ 22 Nov 25 00:32 UTC │
	│ start   │ -p default-k8s-diff-port-046175 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-046175 │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │                     │
	│ image   │ no-preload-983546 image list --format=json                                                                                                                                                                                                    │ no-preload-983546            │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │ 22 Nov 25 00:32 UTC │
	│ pause   │ -p no-preload-983546 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-983546            │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/22 00:32:36
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1122 00:32:36.522235  261434 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:32:36.522463  261434 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:32:36.522471  261434 out.go:374] Setting ErrFile to fd 2...
	I1122 00:32:36.522476  261434 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:32:36.522653  261434 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9122/.minikube/bin
	I1122 00:32:36.523087  261434 out.go:368] Setting JSON to false
	I1122 00:32:36.524268  261434 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4505,"bootTime":1763767051,"procs":324,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1122 00:32:36.524316  261434 start.go:143] virtualization: kvm guest
	I1122 00:32:36.525966  261434 out.go:179] * [default-k8s-diff-port-046175] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1122 00:32:36.526977  261434 out.go:179]   - MINIKUBE_LOCATION=21934
	I1122 00:32:36.526974  261434 notify.go:221] Checking for updates...
	I1122 00:32:36.528839  261434 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 00:32:36.529767  261434 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-9122/kubeconfig
	I1122 00:32:36.530745  261434 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-9122/.minikube
	I1122 00:32:36.531855  261434 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1122 00:32:36.535213  261434 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 00:32:36.536590  261434 config.go:182] Loaded profile config "embed-certs-084979": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:32:36.536673  261434 config.go:182] Loaded profile config "kubernetes-upgrade-619859": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:32:36.536748  261434 config.go:182] Loaded profile config "no-preload-983546": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:32:36.536812  261434 driver.go:422] Setting default libvirt URI to qemu:///system
	I1122 00:32:36.560163  261434 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1122 00:32:36.560334  261434 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:32:36.623309  261434 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-22 00:32:36.612661835 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1122 00:32:36.623445  261434 docker.go:319] overlay module found
	I1122 00:32:36.625329  261434 out.go:179] * Using the docker driver based on user configuration
	I1122 00:32:36.626314  261434 start.go:309] selected driver: docker
	I1122 00:32:36.626325  261434 start.go:930] validating driver "docker" against <nil>
	I1122 00:32:36.626335  261434 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 00:32:36.626894  261434 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:32:36.687219  261434 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-22 00:32:36.677483101 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1122 00:32:36.687408  261434 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1122 00:32:36.687646  261434 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:32:36.689043  261434 out.go:179] * Using Docker driver with root privileges
	I1122 00:32:36.690093  261434 cni.go:84] Creating CNI manager for ""
	I1122 00:32:36.690164  261434 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1122 00:32:36.690179  261434 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1122 00:32:36.690251  261434 start.go:353] cluster config:
	{Name:default-k8s-diff-port-046175 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-046175 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:32:36.691339  261434 out.go:179] * Starting "default-k8s-diff-port-046175" primary control-plane node in "default-k8s-diff-port-046175" cluster
	I1122 00:32:36.692215  261434 cache.go:134] Beginning downloading kic base image for docker with crio
	I1122 00:32:36.693206  261434 out.go:179] * Pulling base image v0.0.48-1763588073-21934 ...
	I1122 00:32:36.694137  261434 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:32:36.694170  261434 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21934-9122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1122 00:32:36.694187  261434 cache.go:65] Caching tarball of preloaded images
	I1122 00:32:36.694228  261434 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon
	I1122 00:32:36.694288  261434 preload.go:238] Found /home/jenkins/minikube-integration/21934-9122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1122 00:32:36.694305  261434 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1122 00:32:36.694423  261434 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/default-k8s-diff-port-046175/config.json ...
	I1122 00:32:36.694456  261434 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/default-k8s-diff-port-046175/config.json: {Name:mk5d2f83b350e180ea73c8b8de614cee9a70b3fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:32:36.714319  261434 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon, skipping pull
	I1122 00:32:36.714340  261434 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e exists in daemon, skipping load
	I1122 00:32:36.714354  261434 cache.go:243] Successfully downloaded all kic artifacts
	I1122 00:32:36.714373  261434 start.go:360] acquireMachinesLock for default-k8s-diff-port-046175: {Name:mkead8b34d9557aba416ceaab7176eb30fd80326 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:32:36.714446  261434 start.go:364] duration metric: took 60.196µs to acquireMachinesLock for "default-k8s-diff-port-046175"
	I1122 00:32:36.714470  261434 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-046175 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-046175 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1122 00:32:36.714521  261434 start.go:125] createHost starting for "" (driver="docker")
	W1122 00:32:35.497349  250396 node_ready.go:57] node "embed-certs-084979" has "Ready":"False" status (will retry)
	W1122 00:32:37.498237  250396 node_ready.go:57] node "embed-certs-084979" has "Ready":"False" status (will retry)
	I1122 00:32:34.926466  218533 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1122 00:32:34.926546  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:32:34.926631  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:32:34.962129  218533 cri.go:89] found id: "31805891c113692793d324f62051c857bb1cbdf7c4889dd16e4c528a240a0a9b"
	I1122 00:32:34.962156  218533 cri.go:89] found id: "28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5"
	I1122 00:32:34.962162  218533 cri.go:89] found id: ""
	I1122 00:32:34.962173  218533 logs.go:282] 2 containers: [31805891c113692793d324f62051c857bb1cbdf7c4889dd16e4c528a240a0a9b 28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5]
	I1122 00:32:34.962239  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:32:34.966901  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:32:34.971260  218533 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1122 00:32:34.971335  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:32:34.999314  218533 cri.go:89] found id: ""
	I1122 00:32:34.999339  218533 logs.go:282] 0 containers: []
	W1122 00:32:34.999349  218533 logs.go:284] No container was found matching "etcd"
	I1122 00:32:34.999356  218533 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1122 00:32:34.999408  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:32:35.029202  218533 cri.go:89] found id: ""
	I1122 00:32:35.029235  218533 logs.go:282] 0 containers: []
	W1122 00:32:35.029247  218533 logs.go:284] No container was found matching "coredns"
	I1122 00:32:35.029260  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:32:35.029323  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:32:35.056223  218533 cri.go:89] found id: "f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:32:35.056247  218533 cri.go:89] found id: ""
	I1122 00:32:35.056257  218533 logs.go:282] 1 containers: [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33]
	I1122 00:32:35.056313  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:32:35.060360  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:32:35.060425  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:32:35.085727  218533 cri.go:89] found id: ""
	I1122 00:32:35.085752  218533 logs.go:282] 0 containers: []
	W1122 00:32:35.085762  218533 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:32:35.085770  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:32:35.085819  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:32:35.111175  218533 cri.go:89] found id: "93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c"
	I1122 00:32:35.111198  218533 cri.go:89] found id: ""
	I1122 00:32:35.111207  218533 logs.go:282] 1 containers: [93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c]
	I1122 00:32:35.111259  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:32:35.115734  218533 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1122 00:32:35.115795  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:32:35.143384  218533 cri.go:89] found id: ""
	I1122 00:32:35.143408  218533 logs.go:282] 0 containers: []
	W1122 00:32:35.143418  218533 logs.go:284] No container was found matching "kindnet"
	I1122 00:32:35.143426  218533 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:32:35.143480  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:32:35.169885  218533 cri.go:89] found id: ""
	I1122 00:32:35.169908  218533 logs.go:282] 0 containers: []
	W1122 00:32:35.169915  218533 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:32:35.169931  218533 logs.go:123] Gathering logs for kube-controller-manager [93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c] ...
	I1122 00:32:35.169944  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 93ef08ab2de6815ca929b849b92172803b8bd1681be8a1bd79e80a8224b1494c"
	I1122 00:32:35.196844  218533 logs.go:123] Gathering logs for CRI-O ...
	I1122 00:32:35.196869  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1122 00:32:35.261683  218533 logs.go:123] Gathering logs for kubelet ...
	I1122 00:32:35.261709  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:32:35.352402  218533 logs.go:123] Gathering logs for dmesg ...
	I1122 00:32:35.352432  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1122 00:32:35.366658  218533 logs.go:123] Gathering logs for kube-apiserver [31805891c113692793d324f62051c857bb1cbdf7c4889dd16e4c528a240a0a9b] ...
	I1122 00:32:35.366684  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 31805891c113692793d324f62051c857bb1cbdf7c4889dd16e4c528a240a0a9b"
	I1122 00:32:35.399065  218533 logs.go:123] Gathering logs for kube-apiserver [28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5] ...
	I1122 00:32:35.399098  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 28b99c190fe8f76828f7a8d3e8a1c4647f2f212ab60d942bd45d4c495951bbc5"
	I1122 00:32:35.432742  218533 logs.go:123] Gathering logs for kube-scheduler [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33] ...
	I1122 00:32:35.432772  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:32:35.486638  218533 logs.go:123] Gathering logs for container status ...
	I1122 00:32:35.486671  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1122 00:32:35.518194  218533 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:32:35.518219  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
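The 218533 lines interleaved above belong to a different profile's post-mortem collector: for each control-plane component it resolves container IDs with crictl ps, then tails the logs of every ID found, exactly the commands visible in the ssh_runner lines. A condensed sketch of that loop, run locally rather than over SSH:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		for _, name := range []string{"kube-apiserver", "etcd", "kube-scheduler", "kube-controller-manager"} {
			// same listing command the log shows: sudo crictl ps -a --quiet --name=<component>
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			if err != nil {
				fmt.Println(name, err)
				continue
			}
			for _, id := range strings.Fields(string(out)) {
				// and the per-container tail: sudo crictl logs --tail 400 <id>
				logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
				fmt.Printf("==> %s [%s] <==\n%s", name, id, logs)
			}
		}
	}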
	I1122 00:32:36.715928  261434 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1122 00:32:36.716160  261434 start.go:159] libmachine.API.Create for "default-k8s-diff-port-046175" (driver="docker")
	I1122 00:32:36.716190  261434 client.go:173] LocalClient.Create starting
	I1122 00:32:36.716241  261434 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem
	I1122 00:32:36.716278  261434 main.go:143] libmachine: Decoding PEM data...
	I1122 00:32:36.716300  261434 main.go:143] libmachine: Parsing certificate...
	I1122 00:32:36.716362  261434 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem
	I1122 00:32:36.716389  261434 main.go:143] libmachine: Decoding PEM data...
	I1122 00:32:36.716401  261434 main.go:143] libmachine: Parsing certificate...
	I1122 00:32:36.716689  261434 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-046175 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1122 00:32:36.735043  261434 cli_runner.go:211] docker network inspect default-k8s-diff-port-046175 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1122 00:32:36.735121  261434 network_create.go:284] running [docker network inspect default-k8s-diff-port-046175] to gather additional debugging logs...
	I1122 00:32:36.735141  261434 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-046175
	W1122 00:32:36.750941  261434 cli_runner.go:211] docker network inspect default-k8s-diff-port-046175 returned with exit code 1
	I1122 00:32:36.750965  261434 network_create.go:287] error running [docker network inspect default-k8s-diff-port-046175]: docker network inspect default-k8s-diff-port-046175: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-046175 not found
	I1122 00:32:36.750980  261434 network_create.go:289] output of [docker network inspect default-k8s-diff-port-046175]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-046175 not found
	
	** /stderr **
	I1122 00:32:36.751156  261434 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:32:36.769452  261434 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-680fbf0b84de IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8e:e6:2f:e5:eb:9f} reservation:<nil>}
	I1122 00:32:36.770339  261434 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-fb90361b5ee3 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:6a:cd:d3:0b:da:39} reservation:<nil>}
	I1122 00:32:36.771280  261434 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8fa9d9e8b5ac IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:be:02:4b:3f:d0:70} reservation:<nil>}
	I1122 00:32:36.772001  261434 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-31079b5ab75b IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:46:64:bf:b9:3e:b5} reservation:<nil>}
	I1122 00:32:36.773011  261434 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e479b0}
	I1122 00:32:36.773037  261434 network_create.go:124] attempt to create docker network default-k8s-diff-port-046175 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1122 00:32:36.773112  261434 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-046175 default-k8s-diff-port-046175
	I1122 00:32:36.819578  261434 network_create.go:108] docker network default-k8s-diff-port-046175 192.168.85.0/24 created
	I1122 00:32:36.819613  261434 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-046175" container
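The subnet walk just above is the interesting part of network creation: candidate /24s start at 192.168.49.0 and advance the third octet in steps of 9 (49, 58, 67, 76, ...) until one does not collide with an existing bridge, which is why this fifth concurrent profile lands on 192.168.85.0/24 and takes .2 as its static IP. A simplified sketch of that probe; minikube's real check also honors reservations and inspects docker networks, as the log lines show:

	package main

	import (
		"fmt"
		"net"
	)

	// taken reports whether any local interface already sits inside subnet.
	func taken(subnet *net.IPNet) bool {
		addrs, _ := net.InterfaceAddrs()
		for _, a := range addrs {
			if ipn, ok := a.(*net.IPNet); ok && subnet.Contains(ipn.IP) {
				return true
			}
		}
		return false
	}

	func main() {
		for third := 49; third <= 247; third += 9 {
			_, subnet, _ := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", third))
			if !taken(subnet) {
				fmt.Println("using free private subnet", subnet) // 192.168.85.0/24 above
				return
			}
		}
	}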
	I1122 00:32:36.819677  261434 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1122 00:32:36.838474  261434 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-046175 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-046175 --label created_by.minikube.sigs.k8s.io=true
	I1122 00:32:36.855384  261434 oci.go:103] Successfully created a docker volume default-k8s-diff-port-046175
	I1122 00:32:36.855448  261434 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-046175-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-046175 --entrypoint /usr/bin/test -v default-k8s-diff-port-046175:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -d /var/lib
	I1122 00:32:37.230374  261434 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-046175
	I1122 00:32:37.230445  261434 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:32:37.230460  261434 kic.go:194] Starting extracting preloaded images to volume ...
	I1122 00:32:37.230531  261434 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21934-9122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-046175:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -I lz4 -xf /preloaded.tar -C /extractDir
	W1122 00:32:39.997933  250396 node_ready.go:57] node "embed-certs-084979" has "Ready":"False" status (will retry)
	W1122 00:32:41.998569  250396 node_ready.go:57] node "embed-certs-084979" has "Ready":"False" status (will retry)
	I1122 00:32:41.586856  261434 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21934-9122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-046175:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -I lz4 -xf /preloaded.tar -C /extractDir: (4.356272502s)
	I1122 00:32:41.586884  261434 kic.go:203] duration metric: took 4.35642215s to extract preloaded images to volume ...
	W1122 00:32:41.586956  261434 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1122 00:32:41.586986  261434 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1122 00:32:41.587021  261434 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1122 00:32:41.639445  261434 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-046175 --name default-k8s-diff-port-046175 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-046175 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-046175 --network default-k8s-diff-port-046175 --ip 192.168.85.2 --volume default-k8s-diff-port-046175:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e
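That docker run invocation is where the HostConfig JSON at the top of this output comes from: --privileged, the seccomp and apparmor unconfined security opts, tmpfs on /run and /tmp, --memory=3072mb (3221225472 bytes), and the loopback-only ephemeral --publish flags all reappear verbatim in the inspect dump. A quick sketch for spot-checking that the flags landed on a created container:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// expected to print something like: 3221225472 true [seccomp=unconfined apparmor=unconfined]
		out, err := exec.Command("docker", "container", "inspect", "-f",
			"{{.HostConfig.Memory}} {{.HostConfig.Privileged}} {{.HostConfig.SecurityOpt}}",
			"default-k8s-diff-port-046175").Output()
		if err != nil {
			panic(err)
		}
		fmt.Print(string(out))
	}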
	I1122 00:32:41.930447  261434 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-046175 --format={{.State.Running}}
	I1122 00:32:41.949044  261434 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-046175 --format={{.State.Status}}
	I1122 00:32:41.968158  261434 cli_runner.go:164] Run: docker exec default-k8s-diff-port-046175 stat /var/lib/dpkg/alternatives/iptables
	I1122 00:32:42.016317  261434 oci.go:144] the created container "default-k8s-diff-port-046175" has a running status.
	I1122 00:32:42.016344  261434 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21934-9122/.minikube/machines/default-k8s-diff-port-046175/id_rsa...
	I1122 00:32:42.060753  261434 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21934-9122/.minikube/machines/default-k8s-diff-port-046175/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1122 00:32:42.088508  261434 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-046175 --format={{.State.Status}}
	I1122 00:32:42.108038  261434 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1122 00:32:42.108071  261434 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-046175 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1122 00:32:42.147297  261434 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-046175 --format={{.State.Status}}
	I1122 00:32:42.171019  261434 machine.go:94] provisionDockerMachine start ...
	I1122 00:32:42.171152  261434 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-046175
	I1122 00:32:42.189403  261434 main.go:143] libmachine: Using SSH client type: native
	I1122 00:32:42.189689  261434 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1122 00:32:42.189703  261434 main.go:143] libmachine: About to run SSH command:
	hostname
	I1122 00:32:42.190364  261434 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59476->127.0.0.1:33078: read: connection reset by peer
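The handshake failure here is the usual race against sshd starting inside the fresh container; minikube's retry succeeds about three seconds later (00:32:45, just below). A crude readiness probe in the same spirit, assuming the 22/tcp host port resolved by the inspect template above; note it only proves the TCP side, and the reset in this log happened during the SSH handshake itself:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		addr := "127.0.0.1:33078" // host port for 22/tcp from the inspect above
		for i := 0; i < 30; i++ {
			c, err := net.DialTimeout("tcp", addr, 2*time.Second)
			if err == nil {
				c.Close()
				fmt.Println("port is accepting connections")
				return
			}
			time.Sleep(time.Second)
		}
		fmt.Println("gave up waiting for sshd")
	}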
	I1122 00:32:45.312424  261434 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-046175
	
	I1122 00:32:45.312451  261434 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-046175"
	I1122 00:32:45.312520  261434 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-046175
	I1122 00:32:45.330094  261434 main.go:143] libmachine: Using SSH client type: native
	I1122 00:32:45.330332  261434 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1122 00:32:45.330346  261434 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-046175 && echo "default-k8s-diff-port-046175" | sudo tee /etc/hostname
	I1122 00:32:45.458700  261434 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-046175
	
	I1122 00:32:45.458789  261434 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-046175
	I1122 00:32:45.476551  261434 main.go:143] libmachine: Using SSH client type: native
	I1122 00:32:45.476883  261434 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1122 00:32:45.476918  261434 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-046175' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-046175/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-046175' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1122 00:32:45.596020  261434 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1122 00:32:45.596047  261434 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21934-9122/.minikube CaCertPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21934-9122/.minikube}
	I1122 00:32:45.596102  261434 ubuntu.go:190] setting up certificates
	I1122 00:32:45.596117  261434 provision.go:84] configureAuth start
	I1122 00:32:45.596196  261434 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-046175
	I1122 00:32:45.613935  261434 provision.go:143] copyHostCerts
	I1122 00:32:45.613993  261434 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9122/.minikube/ca.pem, removing ...
	I1122 00:32:45.614007  261434 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9122/.minikube/ca.pem
	I1122 00:32:45.614104  261434 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21934-9122/.minikube/ca.pem (1078 bytes)
	I1122 00:32:45.614215  261434 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9122/.minikube/cert.pem, removing ...
	I1122 00:32:45.614227  261434 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9122/.minikube/cert.pem
	I1122 00:32:45.614271  261434 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21934-9122/.minikube/cert.pem (1123 bytes)
	I1122 00:32:45.614356  261434 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9122/.minikube/key.pem, removing ...
	I1122 00:32:45.614365  261434 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9122/.minikube/key.pem
	I1122 00:32:45.614403  261434 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21934-9122/.minikube/key.pem (1675 bytes)
	I1122 00:32:45.614477  261434 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21934-9122/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-046175 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-046175 localhost minikube]
	I1122 00:32:45.748479  261434 provision.go:177] copyRemoteCerts
	I1122 00:32:45.748536  261434 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1122 00:32:45.748585  261434 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-046175
	I1122 00:32:45.765839  261434 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/default-k8s-diff-port-046175/id_rsa Username:docker}
	I1122 00:32:45.855090  261434 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1122 00:32:45.873847  261434 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1122 00:32:45.891430  261434 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1122 00:32:45.907811  261434 provision.go:87] duration metric: took 311.67862ms to configureAuth
	I1122 00:32:45.907834  261434 ubuntu.go:206] setting minikube options for container-runtime
	I1122 00:32:45.908001  261434 config.go:182] Loaded profile config "default-k8s-diff-port-046175": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:32:45.908133  261434 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-046175
	I1122 00:32:45.924839  261434 main.go:143] libmachine: Using SSH client type: native
	I1122 00:32:45.925136  261434 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1122 00:32:45.925162  261434 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1122 00:32:46.199313  261434 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1122 00:32:46.199356  261434 machine.go:97] duration metric: took 4.028299103s to provisionDockerMachine
	I1122 00:32:46.199370  261434 client.go:176] duration metric: took 9.483172544s to LocalClient.Create
	I1122 00:32:46.199385  261434 start.go:167] duration metric: took 9.483222923s to libmachine.API.Create "default-k8s-diff-port-046175"
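The provisioning command above was pushed over SSH to 127.0.0.1:33078, the Docker-published port for the container's sshd. A minimal sketch of that pattern with golang.org/x/crypto/ssh follows; the key path is hypothetical (the log uses .minikube/machines/<profile>/id_rsa) and host-key checking is disabled only because this is a throwaway test node:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/path/to/id_rsa") // hypothetical key path
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a disposable test VM
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33078", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()

	// The same command shape as the CRIO_MINIKUBE_OPTIONS step above.
	out, err := sess.CombinedOutput(`sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`)
	fmt.Printf("err=%v\noutput:\n%s", err, out)
}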
	I1122 00:32:46.199398  261434 start.go:293] postStartSetup for "default-k8s-diff-port-046175" (driver="docker")
	I1122 00:32:46.199415  261434 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1122 00:32:46.199492  261434 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1122 00:32:46.199546  261434 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-046175
	I1122 00:32:46.216852  261434 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/default-k8s-diff-port-046175/id_rsa Username:docker}
	I1122 00:32:46.307287  261434 ssh_runner.go:195] Run: cat /etc/os-release
	I1122 00:32:46.311233  261434 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1122 00:32:46.311263  261434 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1122 00:32:46.311274  261434 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-9122/.minikube/addons for local assets ...
	I1122 00:32:46.311370  261434 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-9122/.minikube/files for local assets ...
	I1122 00:32:46.311486  261434 filesync.go:149] local asset: /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem -> 145852.pem in /etc/ssl/certs
	I1122 00:32:46.311641  261434 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1122 00:32:46.319687  261434 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem --> /etc/ssl/certs/145852.pem (1708 bytes)
	I1122 00:32:46.340280  261434 start.go:296] duration metric: took 140.866097ms for postStartSetup
	I1122 00:32:46.340598  261434 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-046175
	I1122 00:32:46.360307  261434 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/default-k8s-diff-port-046175/config.json ...
	I1122 00:32:46.360553  261434 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:32:46.360622  261434 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-046175
	I1122 00:32:46.379149  261434 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/default-k8s-diff-port-046175/id_rsa Username:docker}
	I1122 00:32:46.470003  261434 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 00:32:46.474659  261434 start.go:128] duration metric: took 9.760123533s to createHost
	I1122 00:32:46.474684  261434 start.go:83] releasing machines lock for "default-k8s-diff-port-046175", held for 9.760221202s
	I1122 00:32:46.474746  261434 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-046175
	I1122 00:32:46.492719  261434 ssh_runner.go:195] Run: cat /version.json
	I1122 00:32:46.492806  261434 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-046175
	I1122 00:32:46.492834  261434 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1122 00:32:46.492903  261434 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-046175
	I1122 00:32:46.513217  261434 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/default-k8s-diff-port-046175/id_rsa Username:docker}
	I1122 00:32:46.514217  261434 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/default-k8s-diff-port-046175/id_rsa Username:docker}
	I1122 00:32:46.675141  261434 ssh_runner.go:195] Run: systemctl --version
	I1122 00:32:46.681111  261434 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1122 00:32:46.715248  261434 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1122 00:32:46.719678  261434 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1122 00:32:46.719736  261434 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1122 00:32:46.744248  261434 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1122 00:32:46.744267  261434 start.go:496] detecting cgroup driver to use...
	I1122 00:32:46.744299  261434 detect.go:190] detected "systemd" cgroup driver on host os
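detect.go reports the cgroup driver the host is already using so that cri-o and the kubelet can be configured to match (the cgroup_manager and cgroupDriver settings later in this log). A rough Go sketch of one common heuristic, not necessarily minikube's exact logic: presence of the cgroup v2 unified hierarchy usually implies systemd is managing cgroups.

package main

import (
	"fmt"
	"os"
)

func main() {
	// On a cgroup v2 (unified) host this file exists and systemd is the usual
	// cgroup manager; otherwise assume the legacy cgroupfs driver.
	if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
		fmt.Println("systemd")
	} else {
		fmt.Println("cgroupfs")
	}
}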
	I1122 00:32:46.744342  261434 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1122 00:32:46.760940  261434 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1122 00:32:46.773332  261434 docker.go:218] disabling cri-docker service (if available) ...
	I1122 00:32:46.773392  261434 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1122 00:32:46.791047  261434 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1122 00:32:46.809232  261434 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1122 00:32:46.893087  261434 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1122 00:32:46.978448  261434 docker.go:234] disabling docker service ...
	I1122 00:32:46.978518  261434 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1122 00:32:46.995633  261434 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1122 00:32:47.007691  261434 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1122 00:32:47.100171  261434 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1122 00:32:47.180424  261434 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1122 00:32:47.191743  261434 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1122 00:32:47.205675  261434 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1122 00:32:47.205726  261434 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:32:47.215825  261434 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1122 00:32:47.215888  261434 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:32:47.225347  261434 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:32:47.233675  261434 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:32:47.242879  261434 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1122 00:32:47.250871  261434 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:32:47.260349  261434 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:32:47.273729  261434 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
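Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following keys (a reconstruction from the commands in this log; the TOML table placement is illustrative, since the sed calls only rewrite individual lines):

[crio.image]
pause_image = "registry.k8s.io/pause:3.10.1"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]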
	I1122 00:32:47.282160  261434 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1122 00:32:47.289044  261434 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1122 00:32:47.296035  261434 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:32:47.374523  261434 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1122 00:32:47.504290  261434 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1122 00:32:47.504350  261434 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1122 00:32:47.508470  261434 start.go:564] Will wait 60s for crictl version
	I1122 00:32:47.508537  261434 ssh_runner.go:195] Run: which crictl
	I1122 00:32:47.511925  261434 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1122 00:32:47.535534  261434 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1122 00:32:47.535612  261434 ssh_runner.go:195] Run: crio --version
	I1122 00:32:47.561684  261434 ssh_runner.go:195] Run: crio --version
	I1122 00:32:47.589874  261434 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	W1122 00:32:44.497451  250396 node_ready.go:57] node "embed-certs-084979" has "Ready":"False" status (will retry)
	W1122 00:32:46.497968  250396 node_ready.go:57] node "embed-certs-084979" has "Ready":"False" status (will retry)
	I1122 00:32:47.590995  261434 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-046175 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:32:47.608971  261434 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1122 00:32:47.612728  261434 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:32:47.622377  261434 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-046175 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-046175 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1122 00:32:47.622482  261434 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:32:47.622522  261434 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:32:47.650654  261434 crio.go:514] all images are preloaded for cri-o runtime.
	I1122 00:32:47.650673  261434 crio.go:433] Images already preloaded, skipping extraction
	I1122 00:32:47.650721  261434 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:32:47.673813  261434 crio.go:514] all images are preloaded for cri-o runtime.
	I1122 00:32:47.673831  261434 cache_images.go:86] Images are preloaded, skipping loading
	I1122 00:32:47.673838  261434 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1122 00:32:47.673916  261434 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-046175 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-046175 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
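One detail worth noting in the kubelet unit drop-in above: the empty ExecStart= line is deliberate. In a systemd drop-in, an empty assignment clears the ExecStart inherited from the base kubelet.service, so the following ExecStart fully replaces it instead of appending a second command.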
	I1122 00:32:47.673970  261434 ssh_runner.go:195] Run: crio config
	I1122 00:32:47.717875  261434 cni.go:84] Creating CNI manager for ""
	I1122 00:32:47.717895  261434 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1122 00:32:47.717914  261434 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1122 00:32:47.717943  261434 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-046175 NodeName:default-k8s-diff-port-046175 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1122 00:32:47.718098  261434 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-046175"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1122 00:32:47.718170  261434 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1122 00:32:47.725729  261434 binaries.go:51] Found k8s binaries, skipping transfer
	I1122 00:32:47.725783  261434 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1122 00:32:47.732897  261434 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1122 00:32:47.744599  261434 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1122 00:32:47.758606  261434 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
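The kubeadm.yaml.new just copied up is rendered from Go templates before being scp'd to the node. An illustrative text/template sketch that produces the InitConfiguration fragment seen earlier (the template text here is a simplified stand-in, not minikube's real template):

package main

import (
	"os"
	"text/template"
)

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix://{{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	data := struct {
		AdvertiseAddress string
		APIServerPort    int
		CRISocket        string
		NodeName         string
	}{"192.168.85.2", 8444, "/var/run/crio/crio.sock", "default-k8s-diff-port-046175"}
	template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, data)
}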
	I1122 00:32:47.770256  261434 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1122 00:32:47.773771  261434 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:32:47.783672  261434 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:32:47.876503  261434 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:32:47.907210  261434 certs.go:69] Setting up /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/default-k8s-diff-port-046175 for IP: 192.168.85.2
	I1122 00:32:47.907234  261434 certs.go:195] generating shared ca certs ...
	I1122 00:32:47.907255  261434 certs.go:227] acquiring lock for ca certs: {Name:mk04e97b22fe69e444baefaaf0d53a0afe979470 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:32:47.907424  261434 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21934-9122/.minikube/ca.key
	I1122 00:32:47.907479  261434 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21934-9122/.minikube/proxy-client-ca.key
	I1122 00:32:47.907499  261434 certs.go:257] generating profile certs ...
	I1122 00:32:47.907574  261434 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/default-k8s-diff-port-046175/client.key
	I1122 00:32:47.907598  261434 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/default-k8s-diff-port-046175/client.crt with IP's: []
	I1122 00:32:48.098497  261434 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/default-k8s-diff-port-046175/client.crt ...
	I1122 00:32:48.098522  261434 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/default-k8s-diff-port-046175/client.crt: {Name:mke20aafeea2ad5a90823cbb83359ca833cd1de0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:32:48.098667  261434 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/default-k8s-diff-port-046175/client.key ...
	I1122 00:32:48.098679  261434 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/default-k8s-diff-port-046175/client.key: {Name:mk5a965caa900ee77078d07f07b32a915175f825 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:32:48.098756  261434 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/default-k8s-diff-port-046175/apiserver.key.6f53fefb
	I1122 00:32:48.098771  261434 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/default-k8s-diff-port-046175/apiserver.crt.6f53fefb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1122 00:32:48.117805  261434 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/default-k8s-diff-port-046175/apiserver.crt.6f53fefb ...
	I1122 00:32:48.117828  261434 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/default-k8s-diff-port-046175/apiserver.crt.6f53fefb: {Name:mkbf43904ff6f4c61b4088308310cc9b391b82cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:32:48.117958  261434 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/default-k8s-diff-port-046175/apiserver.key.6f53fefb ...
	I1122 00:32:48.117972  261434 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/default-k8s-diff-port-046175/apiserver.key.6f53fefb: {Name:mka01af649a2bab7bc032e10d17499edfc0bc1ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:32:48.118049  261434 certs.go:382] copying /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/default-k8s-diff-port-046175/apiserver.crt.6f53fefb -> /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/default-k8s-diff-port-046175/apiserver.crt
	I1122 00:32:48.118132  261434 certs.go:386] copying /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/default-k8s-diff-port-046175/apiserver.key.6f53fefb -> /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/default-k8s-diff-port-046175/apiserver.key
	I1122 00:32:48.118217  261434 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/default-k8s-diff-port-046175/proxy-client.key
	I1122 00:32:48.118245  261434 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/default-k8s-diff-port-046175/proxy-client.crt with IP's: []
	I1122 00:32:48.150424  261434 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/default-k8s-diff-port-046175/proxy-client.crt ...
	I1122 00:32:48.150446  261434 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/default-k8s-diff-port-046175/proxy-client.crt: {Name:mk7f5481cfe73c9867a876af5a079a7b0cd7f46d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:32:48.150583  261434 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/default-k8s-diff-port-046175/proxy-client.key ...
	I1122 00:32:48.150596  261434 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/default-k8s-diff-port-046175/proxy-client.key: {Name:mk17f303706d4710dfe07dae4311a7177ab24672 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
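At this point the profile has a client cert, an apiserver serving cert, and an aggregator proxy-client cert. A small Go sketch for sanity-checking that the apiserver cert really carries the IP SANs listed above ([10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]); pass the path to apiserver.crt as the first argument:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	raw, err := os.ReadFile(os.Args[1]) // e.g. .../profiles/default-k8s-diff-port-046175/apiserver.crt
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	fmt.Println("DNS SANs:", cert.DNSNames)
	fmt.Println("IP SANs: ", cert.IPAddresses)
}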
	I1122 00:32:48.150758  261434 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/14585.pem (1338 bytes)
	W1122 00:32:48.150793  261434 certs.go:480] ignoring /home/jenkins/minikube-integration/21934-9122/.minikube/certs/14585_empty.pem, impossibly tiny 0 bytes
	I1122 00:32:48.150803  261434 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca-key.pem (1679 bytes)
	I1122 00:32:48.150827  261434 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem (1078 bytes)
	I1122 00:32:48.150849  261434 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem (1123 bytes)
	I1122 00:32:48.150872  261434 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/key.pem (1675 bytes)
	I1122 00:32:48.150916  261434 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem (1708 bytes)
	I1122 00:32:48.151576  261434 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1122 00:32:48.170357  261434 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1122 00:32:48.187444  261434 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1122 00:32:48.204693  261434 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1122 00:32:48.220510  261434 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/default-k8s-diff-port-046175/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1122 00:32:48.236632  261434 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/default-k8s-diff-port-046175/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1122 00:32:48.253309  261434 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/default-k8s-diff-port-046175/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1122 00:32:48.269941  261434 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/default-k8s-diff-port-046175/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1122 00:32:48.290164  261434 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1122 00:32:48.312691  261434 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/certs/14585.pem --> /usr/share/ca-certificates/14585.pem (1338 bytes)
	I1122 00:32:48.330637  261434 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem --> /usr/share/ca-certificates/145852.pem (1708 bytes)
	I1122 00:32:48.347699  261434 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1122 00:32:48.360125  261434 ssh_runner.go:195] Run: openssl version
	I1122 00:32:48.366573  261434 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1122 00:32:48.374665  261434 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:32:48.378182  261434 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 23:46 /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:32:48.378220  261434 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:32:48.415386  261434 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1122 00:32:48.423998  261434 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14585.pem && ln -fs /usr/share/ca-certificates/14585.pem /etc/ssl/certs/14585.pem"
	I1122 00:32:48.432434  261434 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14585.pem
	I1122 00:32:48.436088  261434 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 23:52 /usr/share/ca-certificates/14585.pem
	I1122 00:32:48.436128  261434 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14585.pem
	I1122 00:32:48.481669  261434 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14585.pem /etc/ssl/certs/51391683.0"
	I1122 00:32:48.491531  261434 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145852.pem && ln -fs /usr/share/ca-certificates/145852.pem /etc/ssl/certs/145852.pem"
	I1122 00:32:48.501446  261434 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145852.pem
	I1122 00:32:48.505862  261434 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 23:52 /usr/share/ca-certificates/145852.pem
	I1122 00:32:48.505915  261434 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145852.pem
	I1122 00:32:48.541445  261434 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145852.pem /etc/ssl/certs/3ec20f2e.0"
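The ln -fs steps above implement OpenSSL's hashed certificate directory convention: openssl x509 -hash -noout prints the subject-name hash of each CA (b5213941 for minikubeCA here), and OpenSSL locates trusted CAs in /etc/ssl/certs via symlinks named <hash>.0, so each PEM added to the directory needs a matching hash-named link.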
	I1122 00:32:48.550038  261434 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1122 00:32:48.553546  261434 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1122 00:32:48.553592  261434 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-046175 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-046175 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:32:48.553663  261434 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1122 00:32:48.553719  261434 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1122 00:32:48.579851  261434 cri.go:89] found id: ""
	I1122 00:32:48.579907  261434 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1122 00:32:48.587362  261434 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1122 00:32:48.594638  261434 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1122 00:32:48.594692  261434 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1122 00:32:48.601943  261434 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1122 00:32:48.601956  261434 kubeadm.go:158] found existing configuration files:
	
	I1122 00:32:48.601989  261434 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1122 00:32:48.609462  261434 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1122 00:32:48.609504  261434 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1122 00:32:48.616723  261434 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1122 00:32:48.624030  261434 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1122 00:32:48.624088  261434 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1122 00:32:48.631374  261434 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1122 00:32:48.639600  261434 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1122 00:32:48.639646  261434 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1122 00:32:48.647448  261434 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1122 00:32:48.655595  261434 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1122 00:32:48.655654  261434 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1122 00:32:48.663915  261434 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1122 00:32:48.708186  261434 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1122 00:32:48.708261  261434 kubeadm.go:319] [preflight] Running pre-flight checks
	I1122 00:32:48.728697  261434 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1122 00:32:48.728775  261434 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1122 00:32:48.728825  261434 kubeadm.go:319] OS: Linux
	I1122 00:32:48.728886  261434 kubeadm.go:319] CGROUPS_CPU: enabled
	I1122 00:32:48.728948  261434 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1122 00:32:48.729009  261434 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1122 00:32:48.729095  261434 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1122 00:32:48.729155  261434 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1122 00:32:48.729215  261434 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1122 00:32:48.729283  261434 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1122 00:32:48.729339  261434 kubeadm.go:319] CGROUPS_IO: enabled
	I1122 00:32:48.796856  261434 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1122 00:32:48.797034  261434 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1122 00:32:48.797193  261434 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1122 00:32:48.807147  261434 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1122 00:32:45.575937  218533 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.057694005s)
	W1122 00:32:45.575986  218533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1122 00:32:48.077113  218533 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
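The healthz poll in the last line keeps retrying https://192.168.103.2:8443/healthz until the apiserver answers. A sketch of that style of probe; certificate verification is skipped here purely for brevity, whereas minikube validates against the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(60 * time.Second)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.103.2:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver never became healthy")
}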
	
	
	==> CRI-O <==
	Nov 22 00:32:11 no-preload-983546 crio[568]: time="2025-11-22T00:32:11.817146075Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 22 00:32:11 no-preload-983546 crio[568]: time="2025-11-22T00:32:11.820982272Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 22 00:32:11 no-preload-983546 crio[568]: time="2025-11-22T00:32:11.821004735Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 22 00:32:20 no-preload-983546 crio[568]: time="2025-11-22T00:32:20.985656077Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=8005889d-c4ae-4e7b-ad26-d6f3d525f3b8 name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:32:20 no-preload-983546 crio[568]: time="2025-11-22T00:32:20.98819799Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=369507f9-4ef0-4df7-922a-db08157e2fbe name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:32:20 no-preload-983546 crio[568]: time="2025-11-22T00:32:20.991398249Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-98spc/dashboard-metrics-scraper" id=34db75e7-a582-4f8e-9b58-86905a64afc7 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:32:20 no-preload-983546 crio[568]: time="2025-11-22T00:32:20.991543137Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:32:20 no-preload-983546 crio[568]: time="2025-11-22T00:32:20.999832969Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:32:21 no-preload-983546 crio[568]: time="2025-11-22T00:32:21.000301596Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:32:21 no-preload-983546 crio[568]: time="2025-11-22T00:32:21.024683554Z" level=info msg="Created container 44f5b1a4bec9348a3b57d7acfabe68de74e478695826968e3d4b17db6d1c654d: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-98spc/dashboard-metrics-scraper" id=34db75e7-a582-4f8e-9b58-86905a64afc7 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:32:21 no-preload-983546 crio[568]: time="2025-11-22T00:32:21.025214978Z" level=info msg="Starting container: 44f5b1a4bec9348a3b57d7acfabe68de74e478695826968e3d4b17db6d1c654d" id=39536217-c916-4dd3-acfa-3d3ee896e98a name=/runtime.v1.RuntimeService/StartContainer
	Nov 22 00:32:21 no-preload-983546 crio[568]: time="2025-11-22T00:32:21.027011158Z" level=info msg="Started container" PID=1756 containerID=44f5b1a4bec9348a3b57d7acfabe68de74e478695826968e3d4b17db6d1c654d description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-98spc/dashboard-metrics-scraper id=39536217-c916-4dd3-acfa-3d3ee896e98a name=/runtime.v1.RuntimeService/StartContainer sandboxID=c3315e63eff016597fd30c0a2ca1bd94bf8b1d36123649ed4305d3928066be56
	Nov 22 00:32:21 no-preload-983546 crio[568]: time="2025-11-22T00:32:21.093208481Z" level=info msg="Removing container: fe72961df99717a4cc67ae6552824d80d27886ee90cdb94f6550c941d15ba492" id=2cbda79c-7b52-432e-a46e-a3bf4ee4494c name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 22 00:32:21 no-preload-983546 crio[568]: time="2025-11-22T00:32:21.102336427Z" level=info msg="Removed container fe72961df99717a4cc67ae6552824d80d27886ee90cdb94f6550c941d15ba492: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-98spc/dashboard-metrics-scraper" id=2cbda79c-7b52-432e-a46e-a3bf4ee4494c name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 22 00:32:32 no-preload-983546 crio[568]: time="2025-11-22T00:32:32.119987382Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=e28bee9f-0d0b-4d1e-ac28-d8b0ace47dbe name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:32:32 no-preload-983546 crio[568]: time="2025-11-22T00:32:32.120895377Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=00240263-c42d-4f12-9a2c-61e5745b3c0c name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:32:32 no-preload-983546 crio[568]: time="2025-11-22T00:32:32.12201947Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=1089f06e-2aad-4abe-b05c-0797f50801cf name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:32:32 no-preload-983546 crio[568]: time="2025-11-22T00:32:32.122176917Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:32:32 no-preload-983546 crio[568]: time="2025-11-22T00:32:32.127401898Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:32:32 no-preload-983546 crio[568]: time="2025-11-22T00:32:32.127588488Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/693fb75a5d6b13bb5d394ad2db2386c50a71f77028983203349861908fe8b047/merged/etc/passwd: no such file or directory"
	Nov 22 00:32:32 no-preload-983546 crio[568]: time="2025-11-22T00:32:32.127621476Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/693fb75a5d6b13bb5d394ad2db2386c50a71f77028983203349861908fe8b047/merged/etc/group: no such file or directory"
	Nov 22 00:32:32 no-preload-983546 crio[568]: time="2025-11-22T00:32:32.12789732Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:32:32 no-preload-983546 crio[568]: time="2025-11-22T00:32:32.157278433Z" level=info msg="Created container 8c92a15018c69e9b3f0bea0b14a42cab51aa67d3463c26f8e3d6595b3b7a1e9b: kube-system/storage-provisioner/storage-provisioner" id=1089f06e-2aad-4abe-b05c-0797f50801cf name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:32:32 no-preload-983546 crio[568]: time="2025-11-22T00:32:32.15778741Z" level=info msg="Starting container: 8c92a15018c69e9b3f0bea0b14a42cab51aa67d3463c26f8e3d6595b3b7a1e9b" id=990d0b93-4751-4977-98e9-4dab6975c0f3 name=/runtime.v1.RuntimeService/StartContainer
	Nov 22 00:32:32 no-preload-983546 crio[568]: time="2025-11-22T00:32:32.159988952Z" level=info msg="Started container" PID=1773 containerID=8c92a15018c69e9b3f0bea0b14a42cab51aa67d3463c26f8e3d6595b3b7a1e9b description=kube-system/storage-provisioner/storage-provisioner id=990d0b93-4751-4977-98e9-4dab6975c0f3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=90785d50afe9440a8628659601748f3672892a30c63659f8278d4a2b5e597769
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	8c92a15018c69       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           18 seconds ago      Running             storage-provisioner         1                   90785d50afe94       storage-provisioner                          kube-system
	44f5b1a4bec93       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           29 seconds ago      Exited              dashboard-metrics-scraper   2                   c3315e63eff01       dashboard-metrics-scraper-6ffb444bf9-98spc   kubernetes-dashboard
	0065675f0b413       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   40 seconds ago      Running             kubernetes-dashboard        0                   ae386be23023b       kubernetes-dashboard-855c9754f9-fb2ss        kubernetes-dashboard
	76af8b2fe949b       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           49 seconds ago      Running             coredns                     0                   74277d2daaea7       coredns-66bc5c9577-4psr2                     kube-system
	8e69bcc2ba825       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           49 seconds ago      Running             busybox                     1                   e1a2b790e52d8       busybox                                      default
	f501552abfbef       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           49 seconds ago      Running             kube-proxy                  0                   4c649c734c20f       kube-proxy-gnlfp                             kube-system
	2365588cd931c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           49 seconds ago      Exited              storage-provisioner         0                   90785d50afe94       storage-provisioner                          kube-system
	95fad35fc6518       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           49 seconds ago      Running             kindnet-cni                 0                   00788dafb2a97       kindnet-rpr2g                                kube-system
	15ff8ca6c3bd3       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           53 seconds ago      Running             etcd                        0                   cafc401b5d9ea       etcd-no-preload-983546                       kube-system
	2395f0fc0ddc2       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           53 seconds ago      Running             kube-apiserver              0                   d0ac9fbab2845       kube-apiserver-no-preload-983546             kube-system
	2e71abd401006       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           53 seconds ago      Running             kube-controller-manager     0                   8c355ae6a5bcb       kube-controller-manager-no-preload-983546    kube-system
	748b8383a47b0       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           53 seconds ago      Running             kube-scheduler              0                   ee44ee75390a7       kube-scheduler-no-preload-983546             kube-system
	
	
	==> coredns [76af8b2fe949b0dc8efe070c8a594ad008ad64c67858fd8f0ae558a32f45fe76] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46768 - 11999 "HINFO IN 6219612061679094651.3936230835065146805. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.091587976s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-983546
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-983546
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785
	                    minikube.k8s.io/name=no-preload-983546
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_22T00_31_03_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 22 Nov 2025 00:31:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-983546
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 22 Nov 2025 00:32:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 22 Nov 2025 00:32:31 +0000   Sat, 22 Nov 2025 00:30:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 22 Nov 2025 00:32:31 +0000   Sat, 22 Nov 2025 00:30:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 22 Nov 2025 00:32:31 +0000   Sat, 22 Nov 2025 00:30:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 22 Nov 2025 00:32:31 +0000   Sat, 22 Nov 2025 00:31:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-983546
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 5665009e93b91d39dc05718b691e3875
	  System UUID:                1d18d6ff-8b0a-4769-8dee-cdd1e29786a3
	  Boot ID:                    8e6acb0e-95c3-406d-a4d5-d86e16610b0f
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 coredns-66bc5c9577-4psr2                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     102s
	  kube-system                 etcd-no-preload-983546                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         109s
	  kube-system                 kindnet-rpr2g                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      103s
	  kube-system                 kube-apiserver-no-preload-983546              250m (3%)     0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-controller-manager-no-preload-983546     200m (2%)     0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-proxy-gnlfp                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-scheduler-no-preload-983546              100m (1%)     0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-98spc    0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-fb2ss         0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 101s               kube-proxy       
	  Normal  Starting                 49s                kube-proxy       
	  Normal  Starting                 108s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  107s               kubelet          Node no-preload-983546 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    107s               kubelet          Node no-preload-983546 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     107s               kubelet          Node no-preload-983546 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           103s               node-controller  Node no-preload-983546 event: Registered Node no-preload-983546 in Controller
	  Normal  NodeReady                89s                kubelet          Node no-preload-983546 status is now: NodeReady
	  Normal  Starting                 54s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  53s (x8 over 54s)  kubelet          Node no-preload-983546 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    53s (x8 over 54s)  kubelet          Node no-preload-983546 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     53s (x8 over 54s)  kubelet          Node no-preload-983546 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           47s                node-controller  Node no-preload-983546 event: Registered Node no-preload-983546 in Controller
	
	
	==> dmesg <==
	[  +0.082960] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023673] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.276450] kauditd_printk_skb: 47 callbacks suppressed
	[Nov21 23:48] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.062274] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.023896] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.023887] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.023879] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.023873] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +2.047773] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +4.031577] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[Nov21 23:49] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[ +16.383304] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[ +32.252695] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	
	
	==> etcd [15ff8ca6c3bd36d322a741daeef18c6b81980f6be123f1eccf822d0b1ce32e19] <==
	{"level":"warn","ts":"2025-11-22T00:31:59.551566Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:31:59.553649Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:31:59.565502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:31:59.575711Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:31:59.592128Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:31:59.608666Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:31:59.616169Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:31:59.631872Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:31:59.643140Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:31:59.662297Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:31:59.672862Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:31:59.681600Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:31:59.690860Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:31:59.701986Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:31:59.711853Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:31:59.724580Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:31:59.732467Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:31:59.743857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:31:59.757182Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:31:59.766130Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:31:59.772240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:31:59.780787Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:31:59.861671Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51596","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-22T00:32:41.210377Z","caller":"traceutil/trace.go:172","msg":"trace[111803095] transaction","detail":"{read_only:false; response_revision:634; number_of_response:1; }","duration":"110.776313ms","start":"2025-11-22T00:32:41.099583Z","end":"2025-11-22T00:32:41.210360Z","steps":["trace[111803095] 'process raft request'  (duration: 110.738735ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-22T00:32:41.210463Z","caller":"traceutil/trace.go:172","msg":"trace[1843632463] transaction","detail":"{read_only:false; response_revision:633; number_of_response:1; }","duration":"166.94869ms","start":"2025-11-22T00:32:41.043492Z","end":"2025-11-22T00:32:41.210441Z","steps":["trace[1843632463] 'process raft request'  (duration: 101.968494ms)","trace[1843632463] 'compare'  (duration: 64.75184ms)"],"step_count":2}
	
	
	==> kernel <==
	 00:32:50 up  1:15,  0 user,  load average: 2.77, 2.94, 1.85
	Linux no-preload-983546 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [95fad35fc65181b4daf9312acf2f79014f290dccfe9528995d782fe1fbb107aa] <==
	I1122 00:32:01.590455       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1122 00:32:01.590702       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1122 00:32:01.590854       1 main.go:148] setting mtu 1500 for CNI 
	I1122 00:32:01.590881       1 main.go:178] kindnetd IP family: "ipv4"
	I1122 00:32:01.590904       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-22T00:32:01Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1122 00:32:01.797398       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1122 00:32:01.797552       1 controller.go:381] "Waiting for informer caches to sync"
	I1122 00:32:01.797591       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1122 00:32:01.799098       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1122 00:32:02.098184       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1122 00:32:02.098221       1 metrics.go:72] Registering metrics
	I1122 00:32:02.098271       1 controller.go:711] "Syncing nftables rules"
	I1122 00:32:11.797815       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1122 00:32:11.797868       1 main.go:301] handling current node
	I1122 00:32:21.797785       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1122 00:32:21.797839       1 main.go:301] handling current node
	I1122 00:32:31.797121       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1122 00:32:31.797156       1 main.go:301] handling current node
	I1122 00:32:41.804196       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1122 00:32:41.804234       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2395f0fc0ddc2558b662ecf094a2c9137111096336ce24f63f4bb978edacc84d] <==
	I1122 00:32:00.536693       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1122 00:32:00.536525       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1122 00:32:00.536864       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1122 00:32:00.537399       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1122 00:32:00.540158       1 aggregator.go:171] initial CRD sync complete...
	I1122 00:32:00.540191       1 autoregister_controller.go:144] Starting autoregister controller
	I1122 00:32:00.540237       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1122 00:32:00.540252       1 cache.go:39] Caches are synced for autoregister controller
	I1122 00:32:00.537379       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1122 00:32:00.550485       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1122 00:32:00.557127       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1122 00:32:00.564677       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1122 00:32:00.564744       1 policy_source.go:240] refreshing policies
	I1122 00:32:00.576262       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1122 00:32:00.929614       1 controller.go:667] quota admission added evaluator for: namespaces
	I1122 00:32:00.961157       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1122 00:32:00.981280       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1122 00:32:01.013102       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1122 00:32:01.022372       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1122 00:32:01.068079       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.41.115"}
	I1122 00:32:01.079791       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.190.148"}
	I1122 00:32:01.439985       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1122 00:32:03.949752       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1122 00:32:04.146775       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1122 00:32:04.495460       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [2e71abd4010063bf4aff10634290d6163b0d784274776fb107399539e1af2d22] <==
	I1122 00:32:03.869876       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1122 00:32:03.872111       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1122 00:32:03.883898       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1122 00:32:03.885282       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1122 00:32:03.886398       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1122 00:32:03.888615       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1122 00:32:03.891859       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1122 00:32:03.893038       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1122 00:32:03.893356       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1122 00:32:03.894409       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1122 00:32:03.894434       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1122 00:32:03.894647       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1122 00:32:03.898566       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1122 00:32:03.906804       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1122 00:32:03.906873       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1122 00:32:03.906929       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1122 00:32:03.906936       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1122 00:32:03.906943       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1122 00:32:03.909083       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1122 00:32:03.909181       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1122 00:32:03.909250       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-983546"
	I1122 00:32:03.909322       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1122 00:32:03.911453       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1122 00:32:03.912704       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1122 00:32:03.912791       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [f501552abfbef5c6045420811113cd307ddce87f32aa846610d14b395f7e0108] <==
	I1122 00:32:01.410125       1 server_linux.go:53] "Using iptables proxy"
	I1122 00:32:01.480314       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1122 00:32:01.580578       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1122 00:32:01.580616       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1122 00:32:01.580711       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1122 00:32:01.626084       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1122 00:32:01.626471       1 server_linux.go:132] "Using iptables Proxier"
	I1122 00:32:01.638483       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1122 00:32:01.639395       1 server.go:527] "Version info" version="v1.34.1"
	I1122 00:32:01.639657       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:32:01.644511       1 config.go:200] "Starting service config controller"
	I1122 00:32:01.644604       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1122 00:32:01.647311       1 config.go:106] "Starting endpoint slice config controller"
	I1122 00:32:01.648969       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1122 00:32:01.650161       1 config.go:309] "Starting node config controller"
	I1122 00:32:01.650178       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1122 00:32:01.650187       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1122 00:32:01.650165       1 config.go:403] "Starting serviceCIDR config controller"
	I1122 00:32:01.650209       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1122 00:32:01.746175       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1122 00:32:01.749458       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1122 00:32:01.751319       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [748b8383a47b0f40485edc4c674299b4dcb993eccaae00337a17f00f55de0076] <==
	I1122 00:31:58.572870       1 serving.go:386] Generated self-signed cert in-memory
	W1122 00:32:00.480426       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1122 00:32:00.480462       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1122 00:32:00.480475       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1122 00:32:00.480485       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1122 00:32:00.512926       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1122 00:32:00.513091       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:32:00.516835       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:32:00.516877       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:32:00.517383       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1122 00:32:00.517476       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1122 00:32:00.617049       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 22 00:32:04 no-preload-983546 kubelet[720]: I1122 00:32:04.401255     720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxtg9\" (UniqueName: \"kubernetes.io/projected/4c34d579-4516-4496-b0a3-b9777443a826-kube-api-access-hxtg9\") pod \"dashboard-metrics-scraper-6ffb444bf9-98spc\" (UID: \"4c34d579-4516-4496-b0a3-b9777443a826\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-98spc"
	Nov 22 00:32:04 no-preload-983546 kubelet[720]: I1122 00:32:04.401276     720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/4c34d579-4516-4496-b0a3-b9777443a826-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-98spc\" (UID: \"4c34d579-4516-4496-b0a3-b9777443a826\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-98spc"
	Nov 22 00:32:04 no-preload-983546 kubelet[720]: I1122 00:32:04.401294     720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/c903c3de-d57d-4f5d-9a37-79b8cd83c15c-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-fb2ss\" (UID: \"c903c3de-d57d-4f5d-9a37-79b8cd83c15c\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-fb2ss"
	Nov 22 00:32:07 no-preload-983546 kubelet[720]: I1122 00:32:07.051428     720 scope.go:117] "RemoveContainer" containerID="dfbff8208e29d897d0219cbadb2bd4f0849cc5732cc2858514f0c552e3fa1f63"
	Nov 22 00:32:08 no-preload-983546 kubelet[720]: I1122 00:32:08.055728     720 scope.go:117] "RemoveContainer" containerID="dfbff8208e29d897d0219cbadb2bd4f0849cc5732cc2858514f0c552e3fa1f63"
	Nov 22 00:32:08 no-preload-983546 kubelet[720]: I1122 00:32:08.055886     720 scope.go:117] "RemoveContainer" containerID="fe72961df99717a4cc67ae6552824d80d27886ee90cdb94f6550c941d15ba492"
	Nov 22 00:32:08 no-preload-983546 kubelet[720]: E1122 00:32:08.056096     720 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-98spc_kubernetes-dashboard(4c34d579-4516-4496-b0a3-b9777443a826)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-98spc" podUID="4c34d579-4516-4496-b0a3-b9777443a826"
	Nov 22 00:32:09 no-preload-983546 kubelet[720]: I1122 00:32:09.061259     720 scope.go:117] "RemoveContainer" containerID="fe72961df99717a4cc67ae6552824d80d27886ee90cdb94f6550c941d15ba492"
	Nov 22 00:32:09 no-preload-983546 kubelet[720]: E1122 00:32:09.061456     720 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-98spc_kubernetes-dashboard(4c34d579-4516-4496-b0a3-b9777443a826)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-98spc" podUID="4c34d579-4516-4496-b0a3-b9777443a826"
	Nov 22 00:32:10 no-preload-983546 kubelet[720]: I1122 00:32:10.064375     720 scope.go:117] "RemoveContainer" containerID="fe72961df99717a4cc67ae6552824d80d27886ee90cdb94f6550c941d15ba492"
	Nov 22 00:32:10 no-preload-983546 kubelet[720]: E1122 00:32:10.064607     720 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-98spc_kubernetes-dashboard(4c34d579-4516-4496-b0a3-b9777443a826)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-98spc" podUID="4c34d579-4516-4496-b0a3-b9777443a826"
	Nov 22 00:32:11 no-preload-983546 kubelet[720]: I1122 00:32:11.095530     720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-fb2ss" podStartSLOduration=1.490162185 podStartE2EDuration="7.095511532s" podCreationTimestamp="2025-11-22 00:32:04 +0000 UTC" firstStartedPulling="2025-11-22 00:32:04.647808526 +0000 UTC m=+7.770520558" lastFinishedPulling="2025-11-22 00:32:10.253157873 +0000 UTC m=+13.375869905" observedRunningTime="2025-11-22 00:32:11.095305975 +0000 UTC m=+14.218018030" watchObservedRunningTime="2025-11-22 00:32:11.095511532 +0000 UTC m=+14.218223584"
	Nov 22 00:32:20 no-preload-983546 kubelet[720]: I1122 00:32:20.985198     720 scope.go:117] "RemoveContainer" containerID="fe72961df99717a4cc67ae6552824d80d27886ee90cdb94f6550c941d15ba492"
	Nov 22 00:32:21 no-preload-983546 kubelet[720]: I1122 00:32:21.091964     720 scope.go:117] "RemoveContainer" containerID="fe72961df99717a4cc67ae6552824d80d27886ee90cdb94f6550c941d15ba492"
	Nov 22 00:32:21 no-preload-983546 kubelet[720]: I1122 00:32:21.092212     720 scope.go:117] "RemoveContainer" containerID="44f5b1a4bec9348a3b57d7acfabe68de74e478695826968e3d4b17db6d1c654d"
	Nov 22 00:32:21 no-preload-983546 kubelet[720]: E1122 00:32:21.092399     720 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-98spc_kubernetes-dashboard(4c34d579-4516-4496-b0a3-b9777443a826)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-98spc" podUID="4c34d579-4516-4496-b0a3-b9777443a826"
	Nov 22 00:32:28 no-preload-983546 kubelet[720]: I1122 00:32:28.306137     720 scope.go:117] "RemoveContainer" containerID="44f5b1a4bec9348a3b57d7acfabe68de74e478695826968e3d4b17db6d1c654d"
	Nov 22 00:32:28 no-preload-983546 kubelet[720]: E1122 00:32:28.306303     720 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-98spc_kubernetes-dashboard(4c34d579-4516-4496-b0a3-b9777443a826)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-98spc" podUID="4c34d579-4516-4496-b0a3-b9777443a826"
	Nov 22 00:32:32 no-preload-983546 kubelet[720]: I1122 00:32:32.119574     720 scope.go:117] "RemoveContainer" containerID="2365588cd931c5a0a64bdb393d00bff477ceb2e6f2d00673d57d9ebfcf40c30d"
	Nov 22 00:32:40 no-preload-983546 kubelet[720]: I1122 00:32:40.985207     720 scope.go:117] "RemoveContainer" containerID="44f5b1a4bec9348a3b57d7acfabe68de74e478695826968e3d4b17db6d1c654d"
	Nov 22 00:32:40 no-preload-983546 kubelet[720]: E1122 00:32:40.985374     720 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-98spc_kubernetes-dashboard(4c34d579-4516-4496-b0a3-b9777443a826)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-98spc" podUID="4c34d579-4516-4496-b0a3-b9777443a826"
	Nov 22 00:32:46 no-preload-983546 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 22 00:32:46 no-preload-983546 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 22 00:32:46 no-preload-983546 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 22 00:32:46 no-preload-983546 systemd[1]: kubelet.service: Consumed 1.455s CPU time.
	
	
	==> kubernetes-dashboard [0065675f0b4131b7a62e9267f42c240303e8fb7718d81fed3827230d2167933b] <==
	2025/11/22 00:32:10 Using namespace: kubernetes-dashboard
	2025/11/22 00:32:10 Using in-cluster config to connect to apiserver
	2025/11/22 00:32:10 Using secret token for csrf signing
	2025/11/22 00:32:10 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/22 00:32:10 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/22 00:32:10 Successful initial request to the apiserver, version: v1.34.1
	2025/11/22 00:32:10 Generating JWE encryption key
	2025/11/22 00:32:10 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/22 00:32:10 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/22 00:32:10 Initializing JWE encryption key from synchronized object
	2025/11/22 00:32:10 Creating in-cluster Sidecar client
	2025/11/22 00:32:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/22 00:32:10 Serving insecurely on HTTP port: 9090
	2025/11/22 00:32:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/22 00:32:10 Starting overwatch
	
	
	==> storage-provisioner [2365588cd931c5a0a64bdb393d00bff477ceb2e6f2d00673d57d9ebfcf40c30d] <==
	I1122 00:32:01.362162       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1122 00:32:31.365426       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [8c92a15018c69e9b3f0bea0b14a42cab51aa67d3463c26f8e3d6595b3b7a1e9b] <==
	I1122 00:32:32.173797       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1122 00:32:32.181787       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1122 00:32:32.181834       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1122 00:32:32.183930       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:32:35.638687       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:32:39.899374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:32:43.497632       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:32:46.552247       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:32:49.573704       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:32:49.577472       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1122 00:32:49.577624       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1122 00:32:49.577681       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6198fa6c-b306-4e68-b0dd-7835a65484f8", APIVersion:"v1", ResourceVersion:"636", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-983546_5dffe9e8-2982-411e-8087-e185fe713471 became leader
	I1122 00:32:49.577751       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-983546_5dffe9e8-2982-411e-8087-e185fe713471!
	W1122 00:32:49.579460       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:32:49.582369       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1122 00:32:49.677982       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-983546_5dffe9e8-2982-411e-8087-e185fe713471!
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-983546 -n no-preload-983546
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-983546 -n no-preload-983546: exit status 2 (335.122573ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-983546 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (5.37s)
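Note the recurring "dial tcp 10.96.0.1:443: i/o timeout" in the CoreDNS and first storage-provisioner logs above: in-cluster clients could not reach the apiserver through the kubernetes service VIP during the pause window. A minimal in-cluster reachability probe, assuming a busybox image whose wget is built with TLS support (the pod name api-probe is illustrative):

	kubectl --context no-preload-983546 run api-probe --rm -i --restart=Never \
	  --image=busybox:1.36 -- \
	  wget -qO- -T 5 --no-check-certificate https://10.96.0.1:443/version

If this prints the apiserver version JSON, the VIP path is healthy again; a timeout reproduces the condition logged above.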

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.93s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-084979 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-084979 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (239.50382ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-22T00:33:01Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-084979 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
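The MK_ADDON_ENABLE_PAUSED failure above comes from minikube's paused-state check, which shells into the node and lists runc containers; "open /run/runc: no such file or directory" means runc's default state root is absent on this cri-o node. A sketch of inspecting the same state by hand, assuming the standard kicbase layout:

	# minikube's check, run manually against the node container:
	docker exec embed-certs-084979 sudo runc list -f json
	# cri-o's own view of what is actually running:
	docker exec embed-certs-084979 sudo crictl ps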
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-084979 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-084979 describe deploy/metrics-server -n kube-system: exit status 1 (55.166423ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-084979 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
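The assertion reads the metrics-server deployment's pod template and expects the image to have been rewritten onto the fake.domain registry. The equivalent manual check, had the deployment been created (here it never was, hence the NotFound above):

	kubectl --context embed-certs-084979 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'
	# expected: fake.domain/registry.k8s.io/echoserver:1.4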
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-084979
helpers_test.go:243: (dbg) docker inspect embed-certs-084979:

-- stdout --
	[
	    {
	        "Id": "e8d02ad472d1b8f40ae0fd92f7878724d9d1bfd0ed3ab0121e898a6471675d58",
	        "Created": "2025-11-22T00:31:48.222415176Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 251410,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-22T00:31:48.257109837Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e5906b22e872a17998ae88aee6d850484e7a99144e0db6afcf2c44a53e6042d4",
	        "ResolvConfPath": "/var/lib/docker/containers/e8d02ad472d1b8f40ae0fd92f7878724d9d1bfd0ed3ab0121e898a6471675d58/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e8d02ad472d1b8f40ae0fd92f7878724d9d1bfd0ed3ab0121e898a6471675d58/hostname",
	        "HostsPath": "/var/lib/docker/containers/e8d02ad472d1b8f40ae0fd92f7878724d9d1bfd0ed3ab0121e898a6471675d58/hosts",
	        "LogPath": "/var/lib/docker/containers/e8d02ad472d1b8f40ae0fd92f7878724d9d1bfd0ed3ab0121e898a6471675d58/e8d02ad472d1b8f40ae0fd92f7878724d9d1bfd0ed3ab0121e898a6471675d58-json.log",
	        "Name": "/embed-certs-084979",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-084979:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-084979",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e8d02ad472d1b8f40ae0fd92f7878724d9d1bfd0ed3ab0121e898a6471675d58",
	                "LowerDir": "/var/lib/docker/overlay2/d197a9ef188060d5b1f2712126cd316d5f140f04dd173528b8af8e7594d20568-init/diff:/var/lib/docker/overlay2/ba9391341a6f38c98c330a240b44a37e901d779a7c95e15c141c59c46e09c348/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d197a9ef188060d5b1f2712126cd316d5f140f04dd173528b8af8e7594d20568/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d197a9ef188060d5b1f2712126cd316d5f140f04dd173528b8af8e7594d20568/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d197a9ef188060d5b1f2712126cd316d5f140f04dd173528b8af8e7594d20568/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-084979",
	                "Source": "/var/lib/docker/volumes/embed-certs-084979/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-084979",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-084979",
	                "name.minikube.sigs.k8s.io": "embed-certs-084979",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "9a9ba816b072da1cdb850fd418afdd3b98e8b9f2e3eeab2c3605a21c09ed3164",
	            "SandboxKey": "/var/run/docker/netns/9a9ba816b072",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33068"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-084979": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0d41c17c02e28b2753b6d078dd9a412b682778fc89e095be2adad8a79a3a99d8",
	                    "EndpointID": "29fceb38c1b61455090197d3206e9acaaca7e5b1969ea325194ff646b0423cbd",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "ce:4c:e5:35:e9:98",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-084979",
	                        "e8d02ad472d1"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
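When only the published ports matter, the full inspect dump can be narrowed with a Go template, assuming jq is available for pretty-printing:

	docker inspect embed-certs-084979 --format '{{json .NetworkSettings.Ports}}' | jq .
	# per the dump above, 8443/tcp (the apiserver) is published on 127.0.0.1:33071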
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-084979 -n embed-certs-084979
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-084979 logs -n 25
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p NoKubernetes-953061                                                                                                                                                                                                                        │ NoKubernetes-953061          │ jenkins │ v1.37.0 │ 22 Nov 25 00:30 UTC │ 22 Nov 25 00:30 UTC │
	│ start   │ -p no-preload-983546 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-983546            │ jenkins │ v1.37.0 │ 22 Nov 25 00:30 UTC │ 22 Nov 25 00:31 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-377321 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-377321       │ jenkins │ v1.37.0 │ 22 Nov 25 00:31 UTC │                     │
	│ stop    │ -p old-k8s-version-377321 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-377321       │ jenkins │ v1.37.0 │ 22 Nov 25 00:31 UTC │ 22 Nov 25 00:31 UTC │
	│ addons  │ enable metrics-server -p no-preload-983546 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-983546            │ jenkins │ v1.37.0 │ 22 Nov 25 00:31 UTC │                     │
	│ addons  │ enable dashboard -p old-k8s-version-377321 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-377321       │ jenkins │ v1.37.0 │ 22 Nov 25 00:31 UTC │ 22 Nov 25 00:31 UTC │
	│ start   │ -p old-k8s-version-377321 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-377321       │ jenkins │ v1.37.0 │ 22 Nov 25 00:31 UTC │ 22 Nov 25 00:32 UTC │
	│ stop    │ -p no-preload-983546 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-983546            │ jenkins │ v1.37.0 │ 22 Nov 25 00:31 UTC │ 22 Nov 25 00:31 UTC │
	│ start   │ -p cert-expiration-624739 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-624739       │ jenkins │ v1.37.0 │ 22 Nov 25 00:31 UTC │ 22 Nov 25 00:31 UTC │
	│ delete  │ -p cert-expiration-624739                                                                                                                                                                                                                     │ cert-expiration-624739       │ jenkins │ v1.37.0 │ 22 Nov 25 00:31 UTC │ 22 Nov 25 00:31 UTC │
	│ start   │ -p embed-certs-084979 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-084979           │ jenkins │ v1.37.0 │ 22 Nov 25 00:31 UTC │ 22 Nov 25 00:32 UTC │
	│ addons  │ enable dashboard -p no-preload-983546 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-983546            │ jenkins │ v1.37.0 │ 22 Nov 25 00:31 UTC │ 22 Nov 25 00:31 UTC │
	│ start   │ -p no-preload-983546 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-983546            │ jenkins │ v1.37.0 │ 22 Nov 25 00:31 UTC │ 22 Nov 25 00:32 UTC │
	│ image   │ old-k8s-version-377321 image list --format=json                                                                                                                                                                                               │ old-k8s-version-377321       │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │ 22 Nov 25 00:32 UTC │
	│ pause   │ -p old-k8s-version-377321 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-377321       │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │                     │
	│ delete  │ -p old-k8s-version-377321                                                                                                                                                                                                                     │ old-k8s-version-377321       │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │ 22 Nov 25 00:32 UTC │
	│ delete  │ -p old-k8s-version-377321                                                                                                                                                                                                                     │ old-k8s-version-377321       │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │ 22 Nov 25 00:32 UTC │
	│ delete  │ -p disable-driver-mounts-751225                                                                                                                                                                                                               │ disable-driver-mounts-751225 │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │ 22 Nov 25 00:32 UTC │
	│ start   │ -p default-k8s-diff-port-046175 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-046175 │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │                     │
	│ image   │ no-preload-983546 image list --format=json                                                                                                                                                                                                    │ no-preload-983546            │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │ 22 Nov 25 00:32 UTC │
	│ pause   │ -p no-preload-983546 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-983546            │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │                     │
	│ delete  │ -p no-preload-983546                                                                                                                                                                                                                          │ no-preload-983546            │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │ 22 Nov 25 00:32 UTC │
	│ delete  │ -p no-preload-983546                                                                                                                                                                                                                          │ no-preload-983546            │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │ 22 Nov 25 00:32 UTC │
	│ start   │ -p newest-cni-531189 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-531189            │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-084979 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-084979           │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
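
	The table above is minikube's audit trail: one row per CLI invocation, with start and end times per profile. minikube persists these records as JSON lines under the profile's logs directory; the sketch below queries them directly, assuming the audit.json location and field names of recent minikube releases (both the path layout and the .data field shapes are assumptions, not confirmed by this report):

	# Hypothetical path, following the MINIKUBE_HOME used in this run.
	AUDIT=/home/jenkins/minikube-integration/21934-9122/.minikube/logs/audit.json
	# Each line is one JSON record; pull out start time, command, and args per profile.
	jq -r 'select(.data.profile == "newest-cni-531189") | [.data.startTime, .data.command, .data.args] | @tsv' "$AUDIT"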
	
	
	==> Last Start <==
	Log file created at: 2025/11/22 00:32:54
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
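
	Decoding a sample entry against the format line above: in "I1122 00:32:54.850537  266241 out.go:360]", I is the severity (Info), 1122 is mmdd, the microsecond timestamp follows, 266241 is the thread id, and out.go:360 is the emitting file and line. A small sketch for skimming only warnings and errors out of a saved copy of this log (the filename is hypothetical):

	# Match W/E/F severities followed by the four-digit mmdd field.
	grep -E '^[WEF][0-9]{4} ' last-start.log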
	I1122 00:32:54.850537  266241 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:32:54.850690  266241 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:32:54.850702  266241 out.go:374] Setting ErrFile to fd 2...
	I1122 00:32:54.850708  266241 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:32:54.851018  266241 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9122/.minikube/bin
	I1122 00:32:54.851659  266241 out.go:368] Setting JSON to false
	I1122 00:32:54.853003  266241 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4524,"bootTime":1763767051,"procs":314,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1122 00:32:54.853070  266241 start.go:143] virtualization: kvm guest
	I1122 00:32:54.854895  266241 out.go:179] * [newest-cni-531189] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1122 00:32:54.856231  266241 out.go:179]   - MINIKUBE_LOCATION=21934
	I1122 00:32:54.856210  266241 notify.go:221] Checking for updates...
	I1122 00:32:54.857225  266241 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 00:32:54.858243  266241 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-9122/kubeconfig
	I1122 00:32:54.859287  266241 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-9122/.minikube
	I1122 00:32:54.860553  266241 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1122 00:32:54.861631  266241 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 00:32:54.863188  266241 config.go:182] Loaded profile config "default-k8s-diff-port-046175": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:32:54.863344  266241 config.go:182] Loaded profile config "embed-certs-084979": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:32:54.863466  266241 config.go:182] Loaded profile config "kubernetes-upgrade-619859": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:32:54.863651  266241 driver.go:422] Setting default libvirt URI to qemu:///system
	I1122 00:32:54.891913  266241 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1122 00:32:54.892043  266241 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:32:54.957225  266241 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-22 00:32:54.946805052 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
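
	The `docker system info --format "{{json .}}"` probe above is how minikube inspects the host engine; the same JSON can be filtered by hand for the fields that matter to the driver check (a sketch, assuming jq is available; the key names mirror the struct fields printed above):

	docker system info --format '{{json .}}' |
	  jq '{ServerVersion, CgroupDriver, NCPU, MemTotal}'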
	I1122 00:32:54.957375  266241 docker.go:319] overlay module found
	I1122 00:32:54.959004  266241 out.go:179] * Using the docker driver based on user configuration
	I1122 00:32:54.960235  266241 start.go:309] selected driver: docker
	I1122 00:32:54.960253  266241 start.go:930] validating driver "docker" against <nil>
	I1122 00:32:54.960264  266241 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 00:32:54.960807  266241 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:32:55.031236  266241 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-22 00:32:55.020637542 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1122 00:32:55.031479  266241 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1122 00:32:55.031514  266241 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1122 00:32:55.031834  266241 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1122 00:32:55.033603  266241 out.go:179] * Using Docker driver with root privileges
	I1122 00:32:55.034554  266241 cni.go:84] Creating CNI manager for ""
	I1122 00:32:55.034623  266241 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1122 00:32:55.034638  266241 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1122 00:32:55.034736  266241 start.go:353] cluster config:
	{Name:newest-cni-531189 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-531189 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:32:55.035774  266241 out.go:179] * Starting "newest-cni-531189" primary control-plane node in "newest-cni-531189" cluster
	I1122 00:32:55.036666  266241 cache.go:134] Beginning downloading kic base image for docker with crio
	I1122 00:32:55.037705  266241 out.go:179] * Pulling base image v0.0.48-1763588073-21934 ...
	I1122 00:32:55.038660  266241 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:32:55.038702  266241 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21934-9122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1122 00:32:55.038716  266241 cache.go:65] Caching tarball of preloaded images
	I1122 00:32:55.038751  266241 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon
	I1122 00:32:55.038813  266241 preload.go:238] Found /home/jenkins/minikube-integration/21934-9122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1122 00:32:55.038821  266241 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1122 00:32:55.038913  266241 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/newest-cni-531189/config.json ...
	I1122 00:32:55.038938  266241 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/newest-cni-531189/config.json: {Name:mke90577cb92e2245b8fb8f161ede322065e709b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:32:55.062952  266241 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon, skipping pull
	I1122 00:32:55.062972  266241 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e exists in daemon, skipping load
	I1122 00:32:55.062991  266241 cache.go:243] Successfully downloaded all kic artifacts
	I1122 00:32:55.063026  266241 start.go:360] acquireMachinesLock for newest-cni-531189: {Name:mkf5f8834fdc49155b7d0cd45743e22eb9e231dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:32:55.063151  266241 start.go:364] duration metric: took 103.718µs to acquireMachinesLock for "newest-cni-531189"
	I1122 00:32:55.063185  266241 start.go:93] Provisioning new machine with config: &{Name:newest-cni-531189 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-531189 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1122 00:32:55.063289  266241 start.go:125] createHost starting for "" (driver="docker")
	I1122 00:32:52.582131  261434 out.go:252]   - Booting up control plane ...
	I1122 00:32:52.582207  261434 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1122 00:32:52.582276  261434 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1122 00:32:52.582993  261434 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1122 00:32:52.596727  261434 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1122 00:32:52.596910  261434 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1122 00:32:52.603199  261434 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1122 00:32:52.603415  261434 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1122 00:32:52.603487  261434 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1122 00:32:52.694919  261434 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1122 00:32:52.695101  261434 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1122 00:32:53.197241  261434 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 502.348535ms
	I1122 00:32:53.200569  261434 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1122 00:32:53.200734  261434 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1122 00:32:53.200905  261434 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1122 00:32:53.201030  261434 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1122 00:32:55.449834  261434 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.249102607s
	I1122 00:32:55.825728  261434 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.625109524s
	I1122 00:32:57.702687  261434 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.502080957s
	I1122 00:32:57.716534  261434 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1122 00:32:57.730467  261434 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1122 00:32:57.742191  261434 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1122 00:32:57.742479  261434 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-046175 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1122 00:32:57.751841  261434 kubeadm.go:319] [bootstrap-token] Using token: vic9vk.zgggyhi1xzfsopcw
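
	The control-plane-check phase above polls three fixed endpoints: the apiserver's /livez on its advertise address (here 192.168.85.2:8444) and the controller-manager and scheduler health ports on localhost. The same probes can be reproduced by hand on the node (a sketch; -k skips verification of the cluster's self-signed certificates):

	curl -k https://192.168.85.2:8444/livez      # kube-apiserver
	curl -k https://127.0.0.1:10257/healthz      # kube-controller-manager
	curl -k https://127.0.0.1:10259/livez        # kube-scheduler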
	I1122 00:32:54.571629  218533 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:32:54.571662  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1122 00:32:54.641466  218533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1122 00:32:57.142027  218533 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1122 00:32:57.142519  218533 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1122 00:32:57.142586  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:32:57.142650  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:32:57.176190  218533 cri.go:89] found id: "31805891c113692793d324f62051c857bb1cbdf7c4889dd16e4c528a240a0a9b"
	I1122 00:32:57.176218  218533 cri.go:89] found id: ""
	I1122 00:32:57.176230  218533 logs.go:282] 1 containers: [31805891c113692793d324f62051c857bb1cbdf7c4889dd16e4c528a240a0a9b]
	I1122 00:32:57.176295  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:32:57.181390  218533 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1122 00:32:57.181457  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:32:57.213067  218533 cri.go:89] found id: ""
	I1122 00:32:57.213096  218533 logs.go:282] 0 containers: []
	W1122 00:32:57.213106  218533 logs.go:284] No container was found matching "etcd"
	I1122 00:32:57.213114  218533 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1122 00:32:57.213178  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:32:57.245687  218533 cri.go:89] found id: ""
	I1122 00:32:57.245717  218533 logs.go:282] 0 containers: []
	W1122 00:32:57.245735  218533 logs.go:284] No container was found matching "coredns"
	I1122 00:32:57.245744  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:32:57.245810  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:32:57.280676  218533 cri.go:89] found id: "f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:32:57.280704  218533 cri.go:89] found id: ""
	I1122 00:32:57.280714  218533 logs.go:282] 1 containers: [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33]
	I1122 00:32:57.280781  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:32:57.285615  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:32:57.285683  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:32:57.315369  218533 cri.go:89] found id: ""
	I1122 00:32:57.315400  218533 logs.go:282] 0 containers: []
	W1122 00:32:57.315411  218533 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:32:57.315420  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:32:57.315480  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:32:57.348272  218533 cri.go:89] found id: "dc92b1709a65ec6599e10e847369b5072bcf0cdafd3a4c895099133103c59b76"
	I1122 00:32:57.348299  218533 cri.go:89] found id: ""
	I1122 00:32:57.348311  218533 logs.go:282] 1 containers: [dc92b1709a65ec6599e10e847369b5072bcf0cdafd3a4c895099133103c59b76]
	I1122 00:32:57.348369  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:32:57.352799  218533 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1122 00:32:57.352868  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:32:57.384991  218533 cri.go:89] found id: ""
	I1122 00:32:57.385015  218533 logs.go:282] 0 containers: []
	W1122 00:32:57.385023  218533 logs.go:284] No container was found matching "kindnet"
	I1122 00:32:57.385030  218533 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:32:57.385101  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:32:57.414133  218533 cri.go:89] found id: ""
	I1122 00:32:57.414165  218533 logs.go:282] 0 containers: []
	W1122 00:32:57.414178  218533 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:32:57.414191  218533 logs.go:123] Gathering logs for kubelet ...
	I1122 00:32:57.414205  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:32:57.500385  218533 logs.go:123] Gathering logs for dmesg ...
	I1122 00:32:57.500416  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1122 00:32:57.515503  218533 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:32:57.515530  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1122 00:32:57.573018  218533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1122 00:32:57.573045  218533 logs.go:123] Gathering logs for kube-apiserver [31805891c113692793d324f62051c857bb1cbdf7c4889dd16e4c528a240a0a9b] ...
	I1122 00:32:57.573074  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 31805891c113692793d324f62051c857bb1cbdf7c4889dd16e4c528a240a0a9b"
	I1122 00:32:57.606496  218533 logs.go:123] Gathering logs for kube-scheduler [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33] ...
	I1122 00:32:57.606530  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:32:57.662298  218533 logs.go:123] Gathering logs for kube-controller-manager [dc92b1709a65ec6599e10e847369b5072bcf0cdafd3a4c895099133103c59b76] ...
	I1122 00:32:57.662328  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dc92b1709a65ec6599e10e847369b5072bcf0cdafd3a4c895099133103c59b76"
	I1122 00:32:57.692871  218533 logs.go:123] Gathering logs for CRI-O ...
	I1122 00:32:57.692898  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1122 00:32:57.760193  218533 logs.go:123] Gathering logs for container status ...
	I1122 00:32:57.760242  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1122 00:32:55.065567  266241 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1122 00:32:55.065799  266241 start.go:159] libmachine.API.Create for "newest-cni-531189" (driver="docker")
	I1122 00:32:55.065827  266241 client.go:173] LocalClient.Create starting
	I1122 00:32:55.065904  266241 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem
	I1122 00:32:55.065945  266241 main.go:143] libmachine: Decoding PEM data...
	I1122 00:32:55.065972  266241 main.go:143] libmachine: Parsing certificate...
	I1122 00:32:55.066027  266241 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem
	I1122 00:32:55.066078  266241 main.go:143] libmachine: Decoding PEM data...
	I1122 00:32:55.066096  266241 main.go:143] libmachine: Parsing certificate...
	I1122 00:32:55.066549  266241 cli_runner.go:164] Run: docker network inspect newest-cni-531189 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1122 00:32:55.086482  266241 cli_runner.go:211] docker network inspect newest-cni-531189 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1122 00:32:55.086545  266241 network_create.go:284] running [docker network inspect newest-cni-531189] to gather additional debugging logs...
	I1122 00:32:55.086569  266241 cli_runner.go:164] Run: docker network inspect newest-cni-531189
	W1122 00:32:55.104706  266241 cli_runner.go:211] docker network inspect newest-cni-531189 returned with exit code 1
	I1122 00:32:55.104736  266241 network_create.go:287] error running [docker network inspect newest-cni-531189]: docker network inspect newest-cni-531189: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-531189 not found
	I1122 00:32:55.104751  266241 network_create.go:289] output of [docker network inspect newest-cni-531189]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-531189 not found
	
	** /stderr **
	I1122 00:32:55.104949  266241 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:32:55.125364  266241 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-680fbf0b84de IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8e:e6:2f:e5:eb:9f} reservation:<nil>}
	I1122 00:32:55.126333  266241 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-fb90361b5ee3 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:6a:cd:d3:0b:da:39} reservation:<nil>}
	I1122 00:32:55.127279  266241 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8fa9d9e8b5ac IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:be:02:4b:3f:d0:70} reservation:<nil>}
	I1122 00:32:55.128397  266241 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e08770}
	I1122 00:32:55.128426  266241 network_create.go:124] attempt to create docker network newest-cni-531189 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1122 00:32:55.128491  266241 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-531189 newest-cni-531189
	I1122 00:32:55.182771  266241 network_create.go:108] docker network newest-cni-531189 192.168.76.0/24 created
	I1122 00:32:55.182807  266241 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-531189" container
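
	Subnet selection above walks the private 192.168.x.0/24 ranges, skipping any already bound to a bridge interface, then creates the network with an explicit gateway so the node can take the deterministic .2 address. The result can be verified with the same inspect call minikube issues (a sketch):

	docker network inspect newest-cni-531189 \
	  --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} gw {{.Gateway}}{{end}}'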
	I1122 00:32:55.182873  266241 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1122 00:32:55.203864  266241 cli_runner.go:164] Run: docker volume create newest-cni-531189 --label name.minikube.sigs.k8s.io=newest-cni-531189 --label created_by.minikube.sigs.k8s.io=true
	I1122 00:32:55.224808  266241 oci.go:103] Successfully created a docker volume newest-cni-531189
	I1122 00:32:55.224884  266241 cli_runner.go:164] Run: docker run --rm --name newest-cni-531189-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-531189 --entrypoint /usr/bin/test -v newest-cni-531189:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -d /var/lib
	I1122 00:32:55.646050  266241 oci.go:107] Successfully prepared a docker volume newest-cni-531189
	I1122 00:32:55.646165  266241 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:32:55.646184  266241 kic.go:194] Starting extracting preloaded images to volume ...
	I1122 00:32:55.646246  266241 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21934-9122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-531189:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -I lz4 -xf /preloaded.tar -C /extractDir
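
	The preload step above avoids pulling every image at first boot: the lz4 tarball of cached images is bind-mounted read-only into a throwaway kicbase container and untarred straight into the named volume that will back the node's /var. Reduced to its essentials (a sketch; $PRELOAD and $KICBASE stand in for the full tarball path and image digest shown above):

	docker run --rm \
	  -v "$PRELOAD":/preloaded.tar:ro \
	  -v newest-cni-531189:/extractDir \
	  --entrypoint /usr/bin/tar \
	  "$KICBASE" -I lz4 -xf /preloaded.tar -C /extractDir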
	I1122 00:32:57.753426  261434 out.go:252]   - Configuring RBAC rules ...
	I1122 00:32:57.753579  261434 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1122 00:32:57.756391  261434 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1122 00:32:57.762095  261434 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1122 00:32:57.765810  261434 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1122 00:32:57.768259  261434 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1122 00:32:57.770651  261434 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1122 00:32:58.129532  261434 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1122 00:32:59.439712  261434 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1122 00:32:59.914684  261434 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1122 00:32:59.916126  261434 kubeadm.go:319] 
	I1122 00:32:59.916231  261434 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1122 00:32:59.916241  261434 kubeadm.go:319] 
	I1122 00:32:59.916376  261434 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1122 00:32:59.916396  261434 kubeadm.go:319] 
	I1122 00:32:59.916462  261434 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1122 00:32:59.916546  261434 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1122 00:32:59.916608  261434 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1122 00:32:59.916619  261434 kubeadm.go:319] 
	I1122 00:32:59.916678  261434 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1122 00:32:59.916689  261434 kubeadm.go:319] 
	I1122 00:32:59.916744  261434 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1122 00:32:59.916754  261434 kubeadm.go:319] 
	I1122 00:32:59.916811  261434 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1122 00:32:59.916928  261434 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1122 00:32:59.917105  261434 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1122 00:32:59.917116  261434 kubeadm.go:319] 
	I1122 00:32:59.917241  261434 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1122 00:32:59.917425  261434 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1122 00:32:59.917447  261434 kubeadm.go:319] 
	I1122 00:32:59.917543  261434 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token vic9vk.zgggyhi1xzfsopcw \
	I1122 00:32:59.917661  261434 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b7f0723026bed3ab68bd6fad96097e10a75fbae3d2b8c0df51e0d691a79889b0 \
	I1122 00:32:59.917688  261434 kubeadm.go:319] 	--control-plane 
	I1122 00:32:59.917694  261434 kubeadm.go:319] 
	I1122 00:32:59.917789  261434 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1122 00:32:59.917798  261434 kubeadm.go:319] 
	I1122 00:32:59.917886  261434 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token vic9vk.zgggyhi1xzfsopcw \
	I1122 00:32:59.917997  261434 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b7f0723026bed3ab68bd6fad96097e10a75fbae3d2b8c0df51e0d691a79889b0 
	I1122 00:32:59.921386  261434 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1122 00:32:59.921517  261434 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1122 00:32:59.921544  261434 cni.go:84] Creating CNI manager for ""
	I1122 00:32:59.921553  261434 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1122 00:32:59.943137  261434 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1122 00:32:59.985090  261434 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1122 00:32:59.989447  261434 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1122 00:32:59.989464  261434 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1122 00:33:00.002398  261434 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1122 00:33:00.282078  261434 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1122 00:33:00.282205  261434 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-046175 minikube.k8s.io/updated_at=2025_11_22T00_33_00_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785 minikube.k8s.io/name=default-k8s-diff-port-046175 minikube.k8s.io/primary=true
	I1122 00:33:00.282207  261434 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:33:00.293183  261434 ops.go:34] apiserver oom_adj: -16
	I1122 00:33:00.379248  261434 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:33:00.879367  261434 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:33:01.379347  261434 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
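
	The repeated `kubectl get sa default` calls above are minikube polling for the default ServiceAccount, its signal that kubeadm bootstrap has settled; the minikube-rbac clusterrolebinding created just before it grants cluster-admin to kube-system's default service account. Both artifacts can be checked after the fact (a sketch):

	kubectl -n default get serviceaccount default
	kubectl get clusterrolebinding minikube-rbac -o wide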
	
	
	==> CRI-O <==
	Nov 22 00:32:50 embed-certs-084979 crio[772]: time="2025-11-22T00:32:50.22194703Z" level=info msg="Starting container: 23ae95d4b1ff2289efc16624c7addd8f216d511a546f6688994d95d758c64b97" id=85951d7e-7818-4fd7-b8bf-2d71c6a645c2 name=/runtime.v1.RuntimeService/StartContainer
	Nov 22 00:32:50 embed-certs-084979 crio[772]: time="2025-11-22T00:32:50.223961757Z" level=info msg="Started container" PID=1832 containerID=23ae95d4b1ff2289efc16624c7addd8f216d511a546f6688994d95d758c64b97 description=kube-system/coredns-66bc5c9577-jjldt/coredns id=85951d7e-7818-4fd7-b8bf-2d71c6a645c2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=744f516bdab551ae4f3a15d408c70bf533b90bba0282d340040d55137b30a554
	Nov 22 00:32:53 embed-certs-084979 crio[772]: time="2025-11-22T00:32:53.323618576Z" level=info msg="Running pod sandbox: default/busybox/POD" id=1aff264a-ad67-4a03-a17a-8586fee7440d name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 22 00:32:53 embed-certs-084979 crio[772]: time="2025-11-22T00:32:53.323697432Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:32:53 embed-certs-084979 crio[772]: time="2025-11-22T00:32:53.328742406Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:3e1f8810fd78c5187943b6716954d389077878f9bc64e9f9525f13b2ba425fb5 UID:ed303111-f811-473a-89a1-52a608759f93 NetNS:/var/run/netns/fe24097e-380a-4ef2-9942-f777debaa637 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0001cc7d8}] Aliases:map[]}"
	Nov 22 00:32:53 embed-certs-084979 crio[772]: time="2025-11-22T00:32:53.328769156Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 22 00:32:53 embed-certs-084979 crio[772]: time="2025-11-22T00:32:53.337802598Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:3e1f8810fd78c5187943b6716954d389077878f9bc64e9f9525f13b2ba425fb5 UID:ed303111-f811-473a-89a1-52a608759f93 NetNS:/var/run/netns/fe24097e-380a-4ef2-9942-f777debaa637 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0001cc7d8}] Aliases:map[]}"
	Nov 22 00:32:53 embed-certs-084979 crio[772]: time="2025-11-22T00:32:53.337911737Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 22 00:32:53 embed-certs-084979 crio[772]: time="2025-11-22T00:32:53.338582628Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 22 00:32:53 embed-certs-084979 crio[772]: time="2025-11-22T00:32:53.33948826Z" level=info msg="Ran pod sandbox 3e1f8810fd78c5187943b6716954d389077878f9bc64e9f9525f13b2ba425fb5 with infra container: default/busybox/POD" id=1aff264a-ad67-4a03-a17a-8586fee7440d name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 22 00:32:53 embed-certs-084979 crio[772]: time="2025-11-22T00:32:53.340714744Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=55dc5619-c49c-4fc8-a97c-dba59a9b4170 name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:32:53 embed-certs-084979 crio[772]: time="2025-11-22T00:32:53.340849472Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=55dc5619-c49c-4fc8-a97c-dba59a9b4170 name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:32:53 embed-certs-084979 crio[772]: time="2025-11-22T00:32:53.340895418Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=55dc5619-c49c-4fc8-a97c-dba59a9b4170 name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:32:53 embed-certs-084979 crio[772]: time="2025-11-22T00:32:53.341675529Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=db7bac94-2f60-47dc-af94-1e43612e231b name=/runtime.v1.ImageService/PullImage
	Nov 22 00:32:53 embed-certs-084979 crio[772]: time="2025-11-22T00:32:53.343158352Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 22 00:32:54 embed-certs-084979 crio[772]: time="2025-11-22T00:32:54.072015075Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=db7bac94-2f60-47dc-af94-1e43612e231b name=/runtime.v1.ImageService/PullImage
	Nov 22 00:32:54 embed-certs-084979 crio[772]: time="2025-11-22T00:32:54.072796559Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=23ffb283-89a7-4e7b-9b6d-0a35e2db035d name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:32:54 embed-certs-084979 crio[772]: time="2025-11-22T00:32:54.074439741Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=032f5745-f157-4670-bee0-d5a60168a625 name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:32:54 embed-certs-084979 crio[772]: time="2025-11-22T00:32:54.078665867Z" level=info msg="Creating container: default/busybox/busybox" id=afeed91d-be76-49f2-a5ca-20a3da03b031 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:32:54 embed-certs-084979 crio[772]: time="2025-11-22T00:32:54.078764926Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:32:54 embed-certs-084979 crio[772]: time="2025-11-22T00:32:54.08241811Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:32:54 embed-certs-084979 crio[772]: time="2025-11-22T00:32:54.0829441Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:32:54 embed-certs-084979 crio[772]: time="2025-11-22T00:32:54.192417914Z" level=info msg="Created container ab5de34bdc199bbd7b72f67cc35eb255563090e5f7c439c09e80e96f45cb4e86: default/busybox/busybox" id=afeed91d-be76-49f2-a5ca-20a3da03b031 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:32:54 embed-certs-084979 crio[772]: time="2025-11-22T00:32:54.19330032Z" level=info msg="Starting container: ab5de34bdc199bbd7b72f67cc35eb255563090e5f7c439c09e80e96f45cb4e86" id=12186255-944d-4ea3-a0ff-ba9833c41501 name=/runtime.v1.RuntimeService/StartContainer
	Nov 22 00:32:54 embed-certs-084979 crio[772]: time="2025-11-22T00:32:54.195325255Z" level=info msg="Started container" PID=1907 containerID=ab5de34bdc199bbd7b72f67cc35eb255563090e5f7c439c09e80e96f45cb4e86 description=default/busybox/busybox id=12186255-944d-4ea3-a0ff-ba9833c41501 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3e1f8810fd78c5187943b6716954d389077878f9bc64e9f9525f13b2ba425fb5
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	ab5de34bdc199       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago        Running             busybox                   0                   3e1f8810fd78c       busybox                                      default
	23ae95d4b1ff2       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      12 seconds ago       Running             coredns                   0                   744f516bdab55       coredns-66bc5c9577-jjldt                     kube-system
	b172a003734d4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago       Running             storage-provisioner       0                   838d85be30ad6       storage-provisioner                          kube-system
	afbe651d66156       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      53 seconds ago       Running             kindnet-cni               0                   f227d547a6519       kindnet-57bxk                                kube-system
	cd7b4de313034       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      53 seconds ago       Running             kube-proxy                0                   5c1dfef6a77c6       kube-proxy-lsc2k                             kube-system
	e2b4995ba907a       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      About a minute ago   Running             kube-scheduler            0                   70c9a6fa9e3d6       kube-scheduler-embed-certs-084979            kube-system
	f9dbb71ecf8e8       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      About a minute ago   Running             kube-apiserver            0                   d2b475876b790       kube-apiserver-embed-certs-084979            kube-system
	0db6bc3145467       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      About a minute ago   Running             kube-controller-manager   0                   bf8b402b66e47       kube-controller-manager-embed-certs-084979   kube-system
	344e0674370fe       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      About a minute ago   Running             etcd                      0                   4883dec0dc191       etcd-embed-certs-084979                      kube-system
	
	
	==> coredns [23ae95d4b1ff2289efc16624c7addd8f216d511a546f6688994d95d758c64b97] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49456 - 14508 "HINFO IN 9101098010157179497.7577852246629079917. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.072031337s
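
	The random-name HINFO query CoreDNS logs at startup is its loop plugin probing for forwarding loops; the NXDOMAIN answer seen here is the healthy outcome. The same log can be pulled from a live cluster via CoreDNS's standard label (a sketch):

	kubectl -n kube-system logs -l k8s-app=kube-dns --tail=50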
	
	
	==> describe nodes <==
	Name:               embed-certs-084979
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-084979
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785
	                    minikube.k8s.io/name=embed-certs-084979
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_22T00_32_03_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 22 Nov 2025 00:32:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-084979
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 22 Nov 2025 00:32:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 22 Nov 2025 00:32:49 +0000   Sat, 22 Nov 2025 00:31:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 22 Nov 2025 00:32:49 +0000   Sat, 22 Nov 2025 00:31:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 22 Nov 2025 00:32:49 +0000   Sat, 22 Nov 2025 00:31:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 22 Nov 2025 00:32:49 +0000   Sat, 22 Nov 2025 00:32:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-084979
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 5665009e93b91d39dc05718b691e3875
	  System UUID:                c16ffbd2-b440-4b5b-8f37-f7fb083b435c
	  Boot ID:                    8e6acb0e-95c3-406d-a4d5-d86e16610b0f
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-66bc5c9577-jjldt                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     54s
	  kube-system                 etcd-embed-certs-084979                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         59s
	  kube-system                 kindnet-57bxk                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      54s
	  kube-system                 kube-apiserver-embed-certs-084979             250m (3%)     0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-controller-manager-embed-certs-084979    200m (2%)     0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-proxy-lsc2k                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kube-system                 kube-scheduler-embed-certs-084979             100m (1%)     0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 53s   kube-proxy       
	  Normal  Starting                 60s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  59s   kubelet          Node embed-certs-084979 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s   kubelet          Node embed-certs-084979 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s   kubelet          Node embed-certs-084979 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           55s   node-controller  Node embed-certs-084979 event: Registered Node embed-certs-084979 in Controller
	  Normal  NodeReady                13s   kubelet          Node embed-certs-084979 status is now: NodeReady
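
The block above is plain `kubectl describe node` output; to re-check just the Ready condition that flipped at 00:32:49 (a sketch in the same jsonpath style the helpers in this report use):

  $ kubectl --context embed-certs-084979 get node embed-certs-084979 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'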
	
	
	==> dmesg <==
	[  +0.082960] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023673] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.276450] kauditd_printk_skb: 47 callbacks suppressed
	[Nov21 23:48] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.062274] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.023896] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.023887] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.023879] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.023873] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +2.047773] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +4.031577] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[Nov21 23:49] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[ +16.383304] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[ +32.252695] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
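
The repeating "martian source" lines are the kernel logging packets that arrived on eth0 with a loopback source address; they recur throughout these runs and are generally harmless noise in kic-based clusters. To confirm which sysctls drive the logging (a sketch; the sysctl names are standard, the profile flag is this run's):

  $ minikube -p embed-certs-084979 ssh -- sysctl net.ipv4.conf.all.log_martians net.ipv4.conf.all.rp_filter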
	
	
	==> etcd [344e0674370fe7d1cd7eba8f1a62dbc9089961ac633fbad03eb18e5b992af05f] <==
	{"level":"warn","ts":"2025-11-22T00:31:59.459980Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:31:59.469501Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:31:59.478145Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:31:59.487574Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:31:59.506833Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:31:59.517888Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:31:59.533223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:31:59.544867Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:31:59.559302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:31:59.564476Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:31:59.572351Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:31:59.580362Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:31:59.589866Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:31:59.597687Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:31:59.607626Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:31:59.620068Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:31:59.633952Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:31:59.642836Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:31:59.657623Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:31:59.666228Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:31:59.675869Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:31:59.738925Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51480","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-22T00:32:03.345959Z","caller":"traceutil/trace.go:172","msg":"trace[1936881135] transaction","detail":"{read_only:false; response_revision:248; number_of_response:1; }","duration":"109.777271ms","start":"2025-11-22T00:32:03.236161Z","end":"2025-11-22T00:32:03.345939Z","steps":["trace[1936881135] 'process raft request'  (duration: 51.552711ms)","trace[1936881135] 'compare'  (duration: 58.129639ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-22T00:32:03.345995Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"108.351216ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/embed-certs-084979\" limit:1 ","response":"range_response_count:1 size:4854"}
	{"level":"info","ts":"2025-11-22T00:32:03.346100Z","caller":"traceutil/trace.go:172","msg":"trace[1865290319] range","detail":"{range_begin:/registry/minions/embed-certs-084979; range_end:; response_count:1; response_revision:247; }","duration":"108.476064ms","start":"2025-11-22T00:32:03.237600Z","end":"2025-11-22T00:32:03.346076Z","steps":["trace[1865290319] 'agreement among raft nodes before linearized reading'  (duration: 50.079646ms)","trace[1865290319] 'range keys from in-memory index tree'  (duration: 58.192618ms)"],"step_count":2}
	
	
	==> kernel <==
	 00:33:02 up  1:15,  0 user,  load average: 2.74, 2.93, 1.86
	Linux embed-certs-084979 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [afbe651d66156bd7f5b0db1cb95c10eb1a2f6e0e5ce43c283eb484a8da9a4b1c] <==
	I1122 00:32:09.296250       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1122 00:32:09.296537       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1122 00:32:09.296690       1 main.go:148] setting mtu 1500 for CNI 
	I1122 00:32:09.296712       1 main.go:178] kindnetd IP family: "ipv4"
	I1122 00:32:09.296738       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-22T00:32:09Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1122 00:32:09.502363       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1122 00:32:09.502386       1 controller.go:381] "Waiting for informer caches to sync"
	I1122 00:32:09.502397       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1122 00:32:09.502516       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1122 00:32:39.503228       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1122 00:32:39.503227       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1122 00:32:39.503236       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1122 00:32:39.503252       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1122 00:32:41.102490       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1122 00:32:41.102528       1 metrics.go:72] Registering metrics
	I1122 00:32:41.102577       1 controller.go:711] "Syncing nftables rules"
	I1122 00:32:49.509409       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1122 00:32:49.509456       1 main.go:301] handling current node
	I1122 00:32:59.504018       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1122 00:32:59.504075       1 main.go:301] handling current node
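
The 00:32:39 list timeouts clear once kube-proxy has programmed the service rules, and the informer caches sync two seconds later. A quick reachability probe for the apiserver ClusterIP from the node (a sketch; -k skips TLS verification, and /healthz is readable anonymously under the default system:public-info-viewer binding):

  $ minikube -p embed-certs-084979 ssh -- curl -sk https://10.96.0.1/healthz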
	
	
	==> kube-apiserver [f9dbb71ecf8e8f2d90a46b20f367e2687d1cb44dd8643d80499a9d07e2e8caf7] <==
	E1122 00:32:00.479783       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1122 00:32:00.517876       1 controller.go:667] quota admission added evaluator for: namespaces
	I1122 00:32:00.524414       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1122 00:32:00.524945       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:32:00.558209       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:32:00.558901       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1122 00:32:00.636022       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1122 00:32:01.299517       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1122 00:32:01.304093       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1122 00:32:01.304111       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1122 00:32:01.945361       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1122 00:32:01.988473       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1122 00:32:02.104668       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1122 00:32:02.112868       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1122 00:32:02.114032       1 controller.go:667] quota admission added evaluator for: endpoints
	I1122 00:32:02.118154       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1122 00:32:02.372531       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1122 00:32:03.253517       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1122 00:32:03.390275       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1122 00:32:03.400686       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1122 00:32:07.975016       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1122 00:32:08.025392       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:32:08.028364       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:32:08.073755       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1122 00:33:01.107344       1 conn.go:339] Error on socket receive: read tcp 192.168.94.2:8443->192.168.94.1:35214: use of closed network connection
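
The lone error at 00:33:01 lines up, to the second, with the kubelet "broken pipe" entry further down; both are most plausibly the log-collection stream itself being torn down rather than a cluster fault. To check whether any earlier connection errors occurred (a sketch):

  $ kubectl --context embed-certs-084979 -n kube-system logs kube-apiserver-embed-certs-084979 | grep 'Error on socket'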
	
	
	==> kube-controller-manager [0db6bc3145467f42f471f0c3308866ebdf4bfdc2caca38f0029a9b98c2a905b1] <==
	I1122 00:32:07.370259       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1122 00:32:07.370420       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-084979"
	I1122 00:32:07.370433       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1122 00:32:07.370447       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1122 00:32:07.370466       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1122 00:32:07.370478       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1122 00:32:07.370605       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1122 00:32:07.370751       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1122 00:32:07.371048       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1122 00:32:07.371129       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1122 00:32:07.371431       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1122 00:32:07.371725       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1122 00:32:07.372479       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1122 00:32:07.372805       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1122 00:32:07.372818       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1122 00:32:07.374756       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1122 00:32:07.378152       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1122 00:32:07.378193       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1122 00:32:07.378223       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1122 00:32:07.378229       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1122 00:32:07.378248       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1122 00:32:07.382144       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1122 00:32:07.395374       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1122 00:32:07.397756       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-084979" podCIDRs=["10.244.0.0/24"]
	I1122 00:32:52.377286       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [cd7b4de313034c4177c816ad39379e936ba1490474fb16a5d062fbf4eb2357f2] <==
	I1122 00:32:09.127010       1 server_linux.go:53] "Using iptables proxy"
	I1122 00:32:09.211958       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1122 00:32:09.312934       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1122 00:32:09.312968       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1122 00:32:09.313082       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1122 00:32:09.337216       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1122 00:32:09.337391       1 server_linux.go:132] "Using iptables Proxier"
	I1122 00:32:09.343774       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1122 00:32:09.344182       1 server.go:527] "Version info" version="v1.34.1"
	I1122 00:32:09.344252       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:32:09.346099       1 config.go:309] "Starting node config controller"
	I1122 00:32:09.346158       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1122 00:32:09.347424       1 config.go:403] "Starting serviceCIDR config controller"
	I1122 00:32:09.347439       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1122 00:32:09.347526       1 config.go:200] "Starting service config controller"
	I1122 00:32:09.347532       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1122 00:32:09.347622       1 config.go:106] "Starting endpoint slice config controller"
	I1122 00:32:09.347631       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1122 00:32:09.447215       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1122 00:32:09.448433       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1122 00:32:09.448478       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1122 00:32:09.448492       1 shared_informer.go:356] "Caches are synced" controller="service config"
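
The only error-level line is kube-proxy flagging its own default for nodePortAddresses. If the suggested `--nodeport-addresses primary` matters for a deployment, the setting lives in the kube-proxy configuration; to read the current value (a sketch; "kube-proxy" is kubeadm's default ConfigMap name):

  $ kubectl --context embed-certs-084979 -n kube-system get configmap kube-proxy -o yaml | grep -n nodePortAddresses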
	
	
	==> kube-scheduler [e2b4995ba907a7195dd9a7a8c0f9b78eaa295035e46aebcc10c3a50eb2746944] <==
	I1122 00:32:00.868518       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:32:00.868553       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:32:00.872425       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1122 00:32:00.872673       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1122 00:32:00.873020       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1122 00:32:00.874007       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1122 00:32:00.874139       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1122 00:32:00.876100       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1122 00:32:00.876533       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1122 00:32:00.877193       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1122 00:32:00.877282       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1122 00:32:00.877294       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1122 00:32:00.877294       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1122 00:32:00.877355       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1122 00:32:00.877385       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1122 00:32:00.877402       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1122 00:32:00.877370       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1122 00:32:00.877585       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1122 00:32:00.877804       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1122 00:32:00.877965       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1122 00:32:00.878003       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1122 00:32:00.878204       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1122 00:32:00.878224       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1122 00:32:01.693388       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I1122 00:32:02.071431       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
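
The burst of "forbidden" errors is the usual bootstrap race: the scheduler starts its watches before the apiserver has finished installing the RBAC bindings, and the final line shows the caches synced roughly two seconds later. To verify nothing kept failing afterwards (a sketch):

  $ kubectl --context embed-certs-084979 -n kube-system logs kube-scheduler-embed-certs-084979 | grep -c 'Failed to watch'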
	
	
	==> kubelet <==
	Nov 22 00:32:08 embed-certs-084979 kubelet[1301]: I1122 00:32:08.143473    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/449cb4db-15d3-42ce-a449-5764483e7c28-lib-modules\") pod \"kube-proxy-lsc2k\" (UID: \"449cb4db-15d3-42ce-a449-5764483e7c28\") " pod="kube-system/kube-proxy-lsc2k"
	Nov 22 00:32:08 embed-certs-084979 kubelet[1301]: I1122 00:32:08.143489    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/24592203-aaaf-4a3f-ba9d-d5982104710f-cni-cfg\") pod \"kindnet-57bxk\" (UID: \"24592203-aaaf-4a3f-ba9d-d5982104710f\") " pod="kube-system/kindnet-57bxk"
	Nov 22 00:32:08 embed-certs-084979 kubelet[1301]: I1122 00:32:08.143504    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/449cb4db-15d3-42ce-a449-5764483e7c28-xtables-lock\") pod \"kube-proxy-lsc2k\" (UID: \"449cb4db-15d3-42ce-a449-5764483e7c28\") " pod="kube-system/kube-proxy-lsc2k"
	Nov 22 00:32:08 embed-certs-084979 kubelet[1301]: I1122 00:32:08.143520    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/24592203-aaaf-4a3f-ba9d-d5982104710f-xtables-lock\") pod \"kindnet-57bxk\" (UID: \"24592203-aaaf-4a3f-ba9d-d5982104710f\") " pod="kube-system/kindnet-57bxk"
	Nov 22 00:32:08 embed-certs-084979 kubelet[1301]: I1122 00:32:08.143615    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq6bz\" (UniqueName: \"kubernetes.io/projected/24592203-aaaf-4a3f-ba9d-d5982104710f-kube-api-access-cq6bz\") pod \"kindnet-57bxk\" (UID: \"24592203-aaaf-4a3f-ba9d-d5982104710f\") " pod="kube-system/kindnet-57bxk"
	Nov 22 00:32:08 embed-certs-084979 kubelet[1301]: I1122 00:32:08.143704    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjnzp\" (UniqueName: \"kubernetes.io/projected/449cb4db-15d3-42ce-a449-5764483e7c28-kube-api-access-tjnzp\") pod \"kube-proxy-lsc2k\" (UID: \"449cb4db-15d3-42ce-a449-5764483e7c28\") " pod="kube-system/kube-proxy-lsc2k"
	Nov 22 00:32:08 embed-certs-084979 kubelet[1301]: I1122 00:32:08.143740    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/24592203-aaaf-4a3f-ba9d-d5982104710f-lib-modules\") pod \"kindnet-57bxk\" (UID: \"24592203-aaaf-4a3f-ba9d-d5982104710f\") " pod="kube-system/kindnet-57bxk"
	Nov 22 00:32:08 embed-certs-084979 kubelet[1301]: E1122 00:32:08.251138    1301 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 22 00:32:08 embed-certs-084979 kubelet[1301]: E1122 00:32:08.251178    1301 projected.go:196] Error preparing data for projected volume kube-api-access-cq6bz for pod kube-system/kindnet-57bxk: configmap "kube-root-ca.crt" not found
	Nov 22 00:32:08 embed-certs-084979 kubelet[1301]: E1122 00:32:08.251270    1301 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/24592203-aaaf-4a3f-ba9d-d5982104710f-kube-api-access-cq6bz podName:24592203-aaaf-4a3f-ba9d-d5982104710f nodeName:}" failed. No retries permitted until 2025-11-22 00:32:08.751238502 +0000 UTC m=+5.835571240 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cq6bz" (UniqueName: "kubernetes.io/projected/24592203-aaaf-4a3f-ba9d-d5982104710f-kube-api-access-cq6bz") pod "kindnet-57bxk" (UID: "24592203-aaaf-4a3f-ba9d-d5982104710f") : configmap "kube-root-ca.crt" not found
	Nov 22 00:32:08 embed-certs-084979 kubelet[1301]: E1122 00:32:08.251716    1301 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 22 00:32:08 embed-certs-084979 kubelet[1301]: E1122 00:32:08.251744    1301 projected.go:196] Error preparing data for projected volume kube-api-access-tjnzp for pod kube-system/kube-proxy-lsc2k: configmap "kube-root-ca.crt" not found
	Nov 22 00:32:08 embed-certs-084979 kubelet[1301]: E1122 00:32:08.251808    1301 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/449cb4db-15d3-42ce-a449-5764483e7c28-kube-api-access-tjnzp podName:449cb4db-15d3-42ce-a449-5764483e7c28 nodeName:}" failed. No retries permitted until 2025-11-22 00:32:08.751787191 +0000 UTC m=+5.836119616 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-tjnzp" (UniqueName: "kubernetes.io/projected/449cb4db-15d3-42ce-a449-5764483e7c28-kube-api-access-tjnzp") pod "kube-proxy-lsc2k" (UID: "449cb4db-15d3-42ce-a449-5764483e7c28") : configmap "kube-root-ca.crt" not found
	Nov 22 00:32:10 embed-certs-084979 kubelet[1301]: I1122 00:32:10.075269    1301 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-57bxk" podStartSLOduration=2.075247538 podStartE2EDuration="2.075247538s" podCreationTimestamp="2025-11-22 00:32:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:32:10.074978078 +0000 UTC m=+7.159310518" watchObservedRunningTime="2025-11-22 00:32:10.075247538 +0000 UTC m=+7.159579980"
	Nov 22 00:32:10 embed-certs-084979 kubelet[1301]: I1122 00:32:10.085599    1301 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lsc2k" podStartSLOduration=2.085579085 podStartE2EDuration="2.085579085s" podCreationTimestamp="2025-11-22 00:32:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:32:10.085351341 +0000 UTC m=+7.169683776" watchObservedRunningTime="2025-11-22 00:32:10.085579085 +0000 UTC m=+7.169911527"
	Nov 22 00:32:49 embed-certs-084979 kubelet[1301]: I1122 00:32:49.831542    1301 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 22 00:32:49 embed-certs-084979 kubelet[1301]: I1122 00:32:49.920086    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7c32c7f-adfa-48a4-ad07-19cdb980d158-config-volume\") pod \"coredns-66bc5c9577-jjldt\" (UID: \"d7c32c7f-adfa-48a4-ad07-19cdb980d158\") " pod="kube-system/coredns-66bc5c9577-jjldt"
	Nov 22 00:32:49 embed-certs-084979 kubelet[1301]: I1122 00:32:49.920137    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v49nd\" (UniqueName: \"kubernetes.io/projected/d7c32c7f-adfa-48a4-ad07-19cdb980d158-kube-api-access-v49nd\") pod \"coredns-66bc5c9577-jjldt\" (UID: \"d7c32c7f-adfa-48a4-ad07-19cdb980d158\") " pod="kube-system/coredns-66bc5c9577-jjldt"
	Nov 22 00:32:49 embed-certs-084979 kubelet[1301]: I1122 00:32:49.920168    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/e38f62f3-8c2b-409c-86a6-e1dcc5d0b7a7-tmp\") pod \"storage-provisioner\" (UID: \"e38f62f3-8c2b-409c-86a6-e1dcc5d0b7a7\") " pod="kube-system/storage-provisioner"
	Nov 22 00:32:49 embed-certs-084979 kubelet[1301]: I1122 00:32:49.920267    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlbbv\" (UniqueName: \"kubernetes.io/projected/e38f62f3-8c2b-409c-86a6-e1dcc5d0b7a7-kube-api-access-jlbbv\") pod \"storage-provisioner\" (UID: \"e38f62f3-8c2b-409c-86a6-e1dcc5d0b7a7\") " pod="kube-system/storage-provisioner"
	Nov 22 00:32:51 embed-certs-084979 kubelet[1301]: I1122 00:32:51.167430    1301 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-jjldt" podStartSLOduration=43.167408737 podStartE2EDuration="43.167408737s" podCreationTimestamp="2025-11-22 00:32:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:32:51.156670166 +0000 UTC m=+48.241002607" watchObservedRunningTime="2025-11-22 00:32:51.167408737 +0000 UTC m=+48.251741159"
	Nov 22 00:32:51 embed-certs-084979 kubelet[1301]: I1122 00:32:51.167626    1301 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=42.167621008 podStartE2EDuration="42.167621008s" podCreationTimestamp="2025-11-22 00:32:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:32:51.166815694 +0000 UTC m=+48.251148135" watchObservedRunningTime="2025-11-22 00:32:51.167621008 +0000 UTC m=+48.251953452"
	Nov 22 00:32:53 embed-certs-084979 kubelet[1301]: I1122 00:32:53.142030    1301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7chz\" (UniqueName: \"kubernetes.io/projected/ed303111-f811-473a-89a1-52a608759f93-kube-api-access-s7chz\") pod \"busybox\" (UID: \"ed303111-f811-473a-89a1-52a608759f93\") " pod="default/busybox"
	Nov 22 00:32:55 embed-certs-084979 kubelet[1301]: I1122 00:32:55.175814    1301 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.443207136 podStartE2EDuration="2.175789322s" podCreationTimestamp="2025-11-22 00:32:53 +0000 UTC" firstStartedPulling="2025-11-22 00:32:53.341226761 +0000 UTC m=+50.425559194" lastFinishedPulling="2025-11-22 00:32:54.073808957 +0000 UTC m=+51.158141380" observedRunningTime="2025-11-22 00:32:55.175429628 +0000 UTC m=+52.259762069" watchObservedRunningTime="2025-11-22 00:32:55.175789322 +0000 UTC m=+52.260121763"
	Nov 22 00:33:01 embed-certs-084979 kubelet[1301]: E1122 00:33:01.107253    1301 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:52062->127.0.0.1:34511: write tcp 127.0.0.1:52062->127.0.0.1:34511: write: broken pipe
	
	
	==> storage-provisioner [b172a003734d410309f9a04131e02610b24ba7afeb48757e57deb4bda9ce1661] <==
	I1122 00:32:50.233586       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1122 00:32:50.244174       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1122 00:32:50.244284       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1122 00:32:50.246592       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:32:50.252833       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1122 00:32:50.253070       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1122 00:32:50.253130       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"85172fda-6e3b-4170-b156-9c1a3f0d4eef", APIVersion:"v1", ResourceVersion:"420", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-084979_b2f17f5e-d68d-4a06-96db-05f84f4d9a23 became leader
	I1122 00:32:50.253263       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-084979_b2f17f5e-d68d-4a06-96db-05f84f4d9a23!
	W1122 00:32:50.255619       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:32:50.258610       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1122 00:32:50.354111       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-084979_b2f17f5e-d68d-4a06-96db-05f84f4d9a23!
	W1122 00:32:52.261100       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:32:52.264955       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:32:54.267836       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:32:54.271840       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:32:56.275598       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:32:56.279565       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:32:58.282454       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:32:58.298210       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:33:00.302541       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:33:00.306854       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:33:02.310381       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:33:02.314399       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
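
One recurring item in the capture above deserves a note: the storage-provisioner's leader-election warnings fire every two seconds because its lease still rides on a v1 Endpoints object. To inspect that object directly (a sketch; the name comes from the LeaderElection event in the log):

  $ kubectl --context embed-certs-084979 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml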
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-084979 -n embed-certs-084979
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-084979 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.93s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-531189 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-531189 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (245.535633ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-22T00:33:24Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-531189 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
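
The failing check is the `sudo runc list -f json` probe quoted in stderr above. The "/run/runc: no such file or directory" error suggests the node's CRI-O is using a different OCI runtime root (crun is CRI-O's default runtime; this is an inference from the error string, not something verified in this run). To reproduce the probe by hand (a sketch; the profile name is this run's):

  $ minikube -p newest-cni-531189 ssh -- sudo runc list -f json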
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-531189
helpers_test.go:243: (dbg) docker inspect newest-cni-531189:

-- stdout --
	[
	    {
	        "Id": "65c93ca663785ca6d510675e02ddb8c57cbafd33c1f68bf07a5dd2a4c309fb8b",
	        "Created": "2025-11-22T00:33:00.30734986Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 267084,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-22T00:33:00.349761598Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e5906b22e872a17998ae88aee6d850484e7a99144e0db6afcf2c44a53e6042d4",
	        "ResolvConfPath": "/var/lib/docker/containers/65c93ca663785ca6d510675e02ddb8c57cbafd33c1f68bf07a5dd2a4c309fb8b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/65c93ca663785ca6d510675e02ddb8c57cbafd33c1f68bf07a5dd2a4c309fb8b/hostname",
	        "HostsPath": "/var/lib/docker/containers/65c93ca663785ca6d510675e02ddb8c57cbafd33c1f68bf07a5dd2a4c309fb8b/hosts",
	        "LogPath": "/var/lib/docker/containers/65c93ca663785ca6d510675e02ddb8c57cbafd33c1f68bf07a5dd2a4c309fb8b/65c93ca663785ca6d510675e02ddb8c57cbafd33c1f68bf07a5dd2a4c309fb8b-json.log",
	        "Name": "/newest-cni-531189",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-531189:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-531189",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "65c93ca663785ca6d510675e02ddb8c57cbafd33c1f68bf07a5dd2a4c309fb8b",
	                "LowerDir": "/var/lib/docker/overlay2/bcd7c4ef17fddfcbaac27bd8b3fcac3c9932d29e7097cf3cad054082bb292d8f-init/diff:/var/lib/docker/overlay2/ba9391341a6f38c98c330a240b44a37e901d779a7c95e15c141c59c46e09c348/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bcd7c4ef17fddfcbaac27bd8b3fcac3c9932d29e7097cf3cad054082bb292d8f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bcd7c4ef17fddfcbaac27bd8b3fcac3c9932d29e7097cf3cad054082bb292d8f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bcd7c4ef17fddfcbaac27bd8b3fcac3c9932d29e7097cf3cad054082bb292d8f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-531189",
	                "Source": "/var/lib/docker/volumes/newest-cni-531189/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-531189",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-531189",
	                "name.minikube.sigs.k8s.io": "newest-cni-531189",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "d9bbed9c536e61e21da49be08ab1d654e502dd498a7721dd5d165aeafe0ca236",
	            "SandboxKey": "/var/run/docker/netns/d9bbed9c536e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-531189": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b8187a3a6ebbb0612319b5aa920a8b27ea6d7a8c6a1abed3774766a0afd701a8",
	                    "EndpointID": "54fa076cdd51d06db78eb76a3936aee2a9f6ef4cc3169da9f209f90e424a760c",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "1e:06:eb:db:c5:17",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-531189",
	                        "65c93ca66378"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
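For reference, the port mappings recorded under "NetworkSettings.Ports" in the inspect output above can be read back with a Go template, the same pattern the test harness uses later in this log. A minimal sketch, assuming the node container newest-cni-531189 from this run is still present:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-531189

Against the state captured above this would print 33083, the host port that 127.0.0.1 maps to the container's SSH port 22.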
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-531189 -n newest-cni-531189
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-531189 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p no-preload-983546 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-983546            │ jenkins │ v1.37.0 │ 22 Nov 25 00:31 UTC │                     │
	│ addons  │ enable dashboard -p old-k8s-version-377321 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-377321       │ jenkins │ v1.37.0 │ 22 Nov 25 00:31 UTC │ 22 Nov 25 00:31 UTC │
	│ start   │ -p old-k8s-version-377321 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-377321       │ jenkins │ v1.37.0 │ 22 Nov 25 00:31 UTC │ 22 Nov 25 00:32 UTC │
	│ stop    │ -p no-preload-983546 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-983546            │ jenkins │ v1.37.0 │ 22 Nov 25 00:31 UTC │ 22 Nov 25 00:31 UTC │
	│ start   │ -p cert-expiration-624739 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-624739       │ jenkins │ v1.37.0 │ 22 Nov 25 00:31 UTC │ 22 Nov 25 00:31 UTC │
	│ delete  │ -p cert-expiration-624739                                                                                                                                                                                                                     │ cert-expiration-624739       │ jenkins │ v1.37.0 │ 22 Nov 25 00:31 UTC │ 22 Nov 25 00:31 UTC │
	│ start   │ -p embed-certs-084979 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-084979           │ jenkins │ v1.37.0 │ 22 Nov 25 00:31 UTC │ 22 Nov 25 00:32 UTC │
	│ addons  │ enable dashboard -p no-preload-983546 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-983546            │ jenkins │ v1.37.0 │ 22 Nov 25 00:31 UTC │ 22 Nov 25 00:31 UTC │
	│ start   │ -p no-preload-983546 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-983546            │ jenkins │ v1.37.0 │ 22 Nov 25 00:31 UTC │ 22 Nov 25 00:32 UTC │
	│ image   │ old-k8s-version-377321 image list --format=json                                                                                                                                                                                               │ old-k8s-version-377321       │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │ 22 Nov 25 00:32 UTC │
	│ pause   │ -p old-k8s-version-377321 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-377321       │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │                     │
	│ delete  │ -p old-k8s-version-377321                                                                                                                                                                                                                     │ old-k8s-version-377321       │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │ 22 Nov 25 00:32 UTC │
	│ delete  │ -p old-k8s-version-377321                                                                                                                                                                                                                     │ old-k8s-version-377321       │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │ 22 Nov 25 00:32 UTC │
	│ delete  │ -p disable-driver-mounts-751225                                                                                                                                                                                                               │ disable-driver-mounts-751225 │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │ 22 Nov 25 00:32 UTC │
	│ start   │ -p default-k8s-diff-port-046175 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-046175 │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │ 22 Nov 25 00:33 UTC │
	│ image   │ no-preload-983546 image list --format=json                                                                                                                                                                                                    │ no-preload-983546            │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │ 22 Nov 25 00:32 UTC │
	│ pause   │ -p no-preload-983546 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-983546            │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │                     │
	│ delete  │ -p no-preload-983546                                                                                                                                                                                                                          │ no-preload-983546            │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │ 22 Nov 25 00:32 UTC │
	│ delete  │ -p no-preload-983546                                                                                                                                                                                                                          │ no-preload-983546            │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │ 22 Nov 25 00:32 UTC │
	│ start   │ -p newest-cni-531189 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-531189            │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │ 22 Nov 25 00:33 UTC │
	│ addons  │ enable metrics-server -p embed-certs-084979 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-084979           │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │                     │
	│ stop    │ -p embed-certs-084979 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-084979           │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │ 22 Nov 25 00:33 UTC │
	│ addons  │ enable dashboard -p embed-certs-084979 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-084979           │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │ 22 Nov 25 00:33 UTC │
	│ start   │ -p embed-certs-084979 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-084979           │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-531189 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-531189            │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/22 00:33:19
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1122 00:33:19.523863  271909 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:33:19.524178  271909 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:33:19.524190  271909 out.go:374] Setting ErrFile to fd 2...
	I1122 00:33:19.524197  271909 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:33:19.524505  271909 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9122/.minikube/bin
	I1122 00:33:19.525213  271909 out.go:368] Setting JSON to false
	I1122 00:33:19.526746  271909 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4548,"bootTime":1763767051,"procs":312,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1122 00:33:19.526818  271909 start.go:143] virtualization: kvm guest
	I1122 00:33:19.528603  271909 out.go:179] * [embed-certs-084979] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1122 00:33:19.530373  271909 out.go:179]   - MINIKUBE_LOCATION=21934
	I1122 00:33:19.530375  271909 notify.go:221] Checking for updates...
	I1122 00:33:19.531514  271909 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 00:33:19.532591  271909 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-9122/kubeconfig
	I1122 00:33:19.533700  271909 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-9122/.minikube
	I1122 00:33:19.534719  271909 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1122 00:33:19.538226  271909 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 00:33:15.952133  218533 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1122 00:33:15.952504  218533 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1122 00:33:15.952555  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:33:15.952607  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:33:15.983913  218533 cri.go:89] found id: "31805891c113692793d324f62051c857bb1cbdf7c4889dd16e4c528a240a0a9b"
	I1122 00:33:15.983938  218533 cri.go:89] found id: ""
	I1122 00:33:15.983948  218533 logs.go:282] 1 containers: [31805891c113692793d324f62051c857bb1cbdf7c4889dd16e4c528a240a0a9b]
	I1122 00:33:15.984011  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:33:15.988316  218533 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1122 00:33:15.988382  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:33:16.018200  218533 cri.go:89] found id: ""
	I1122 00:33:16.018228  218533 logs.go:282] 0 containers: []
	W1122 00:33:16.018237  218533 logs.go:284] No container was found matching "etcd"
	I1122 00:33:16.018244  218533 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1122 00:33:16.018302  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:33:16.047751  218533 cri.go:89] found id: ""
	I1122 00:33:16.047778  218533 logs.go:282] 0 containers: []
	W1122 00:33:16.047788  218533 logs.go:284] No container was found matching "coredns"
	I1122 00:33:16.047797  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:33:16.047851  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:33:16.077525  218533 cri.go:89] found id: "f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:33:16.077548  218533 cri.go:89] found id: ""
	I1122 00:33:16.077558  218533 logs.go:282] 1 containers: [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33]
	I1122 00:33:16.077622  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:33:16.082511  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:33:16.082574  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:33:16.113401  218533 cri.go:89] found id: ""
	I1122 00:33:16.113425  218533 logs.go:282] 0 containers: []
	W1122 00:33:16.113435  218533 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:33:16.113443  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:33:16.113496  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:33:16.143080  218533 cri.go:89] found id: "dc92b1709a65ec6599e10e847369b5072bcf0cdafd3a4c895099133103c59b76"
	I1122 00:33:16.143103  218533 cri.go:89] found id: ""
	I1122 00:33:16.143113  218533 logs.go:282] 1 containers: [dc92b1709a65ec6599e10e847369b5072bcf0cdafd3a4c895099133103c59b76]
	I1122 00:33:16.143168  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:33:16.147833  218533 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1122 00:33:16.147897  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:33:16.176876  218533 cri.go:89] found id: ""
	I1122 00:33:16.176901  218533 logs.go:282] 0 containers: []
	W1122 00:33:16.176911  218533 logs.go:284] No container was found matching "kindnet"
	I1122 00:33:16.176918  218533 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:33:16.176973  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:33:16.206965  218533 cri.go:89] found id: ""
	I1122 00:33:16.206991  218533 logs.go:282] 0 containers: []
	W1122 00:33:16.206999  218533 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:33:16.207008  218533 logs.go:123] Gathering logs for dmesg ...
	I1122 00:33:16.207019  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1122 00:33:16.223103  218533 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:33:16.223132  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1122 00:33:16.287753  218533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1122 00:33:16.287785  218533 logs.go:123] Gathering logs for kube-apiserver [31805891c113692793d324f62051c857bb1cbdf7c4889dd16e4c528a240a0a9b] ...
	I1122 00:33:16.287801  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 31805891c113692793d324f62051c857bb1cbdf7c4889dd16e4c528a240a0a9b"
	I1122 00:33:16.324180  218533 logs.go:123] Gathering logs for kube-scheduler [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33] ...
	I1122 00:33:16.324217  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:33:16.390653  218533 logs.go:123] Gathering logs for kube-controller-manager [dc92b1709a65ec6599e10e847369b5072bcf0cdafd3a4c895099133103c59b76] ...
	I1122 00:33:16.390687  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dc92b1709a65ec6599e10e847369b5072bcf0cdafd3a4c895099133103c59b76"
	I1122 00:33:16.418400  218533 logs.go:123] Gathering logs for CRI-O ...
	I1122 00:33:16.418427  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1122 00:33:16.476172  218533 logs.go:123] Gathering logs for container status ...
	I1122 00:33:16.476200  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1122 00:33:16.507721  218533 logs.go:123] Gathering logs for kubelet ...
	I1122 00:33:16.507748  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:33:19.098131  218533 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1122 00:33:19.098570  218533 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1122 00:33:19.098626  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:33:19.098676  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:33:19.129130  218533 cri.go:89] found id: "31805891c113692793d324f62051c857bb1cbdf7c4889dd16e4c528a240a0a9b"
	I1122 00:33:19.129154  218533 cri.go:89] found id: ""
	I1122 00:33:19.129165  218533 logs.go:282] 1 containers: [31805891c113692793d324f62051c857bb1cbdf7c4889dd16e4c528a240a0a9b]
	I1122 00:33:19.129217  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:33:19.133133  218533 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1122 00:33:19.133195  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:33:19.162511  218533 cri.go:89] found id: ""
	I1122 00:33:19.162539  218533 logs.go:282] 0 containers: []
	W1122 00:33:19.162550  218533 logs.go:284] No container was found matching "etcd"
	I1122 00:33:19.162556  218533 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1122 00:33:19.162612  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:33:19.200402  218533 cri.go:89] found id: ""
	I1122 00:33:19.200424  218533 logs.go:282] 0 containers: []
	W1122 00:33:19.200431  218533 logs.go:284] No container was found matching "coredns"
	I1122 00:33:19.200437  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:33:19.200491  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:33:19.234947  218533 cri.go:89] found id: "f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:33:19.234967  218533 cri.go:89] found id: ""
	I1122 00:33:19.234977  218533 logs.go:282] 1 containers: [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33]
	I1122 00:33:19.235034  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:33:19.238893  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:33:19.238955  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:33:19.267650  218533 cri.go:89] found id: ""
	I1122 00:33:19.267674  218533 logs.go:282] 0 containers: []
	W1122 00:33:19.267684  218533 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:33:19.267692  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:33:19.267747  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:33:19.297420  218533 cri.go:89] found id: "dc92b1709a65ec6599e10e847369b5072bcf0cdafd3a4c895099133103c59b76"
	I1122 00:33:19.297441  218533 cri.go:89] found id: ""
	I1122 00:33:19.297452  218533 logs.go:282] 1 containers: [dc92b1709a65ec6599e10e847369b5072bcf0cdafd3a4c895099133103c59b76]
	I1122 00:33:19.297512  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:33:19.302425  218533 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1122 00:33:19.302489  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:33:19.331630  218533 cri.go:89] found id: ""
	I1122 00:33:19.331653  218533 logs.go:282] 0 containers: []
	W1122 00:33:19.331664  218533 logs.go:284] No container was found matching "kindnet"
	I1122 00:33:19.331671  218533 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:33:19.331724  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:33:19.361351  218533 cri.go:89] found id: ""
	I1122 00:33:19.361372  218533 logs.go:282] 0 containers: []
	W1122 00:33:19.361379  218533 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:33:19.361386  218533 logs.go:123] Gathering logs for CRI-O ...
	I1122 00:33:19.361398  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1122 00:33:19.425387  218533 logs.go:123] Gathering logs for container status ...
	I1122 00:33:19.425415  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1122 00:33:19.456976  218533 logs.go:123] Gathering logs for kubelet ...
	I1122 00:33:19.456999  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:33:19.539724  271909 config.go:182] Loaded profile config "embed-certs-084979": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:33:19.540391  271909 driver.go:422] Setting default libvirt URI to qemu:///system
	I1122 00:33:19.567024  271909 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1122 00:33:19.567207  271909 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:33:19.627997  271909 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-22 00:33:19.618189395 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1122 00:33:19.628123  271909 docker.go:319] overlay module found
	I1122 00:33:19.629763  271909 out.go:179] * Using the docker driver based on existing profile
	I1122 00:33:19.630932  271909 start.go:309] selected driver: docker
	I1122 00:33:19.630946  271909 start.go:930] validating driver "docker" against &{Name:embed-certs-084979 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-084979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:33:19.631045  271909 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 00:33:19.631638  271909 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:33:19.693883  271909 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-22 00:33:19.682475533 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1122 00:33:19.694278  271909 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:33:19.694327  271909 cni.go:84] Creating CNI manager for ""
	I1122 00:33:19.694397  271909 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1122 00:33:19.694451  271909 start.go:353] cluster config:
	{Name:embed-certs-084979 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-084979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:33:19.696012  271909 out.go:179] * Starting "embed-certs-084979" primary control-plane node in "embed-certs-084979" cluster
	I1122 00:33:19.697282  271909 cache.go:134] Beginning downloading kic base image for docker with crio
	I1122 00:33:19.698480  271909 out.go:179] * Pulling base image v0.0.48-1763588073-21934 ...
	I1122 00:33:19.699511  271909 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:33:19.699539  271909 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21934-9122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1122 00:33:19.699550  271909 cache.go:65] Caching tarball of preloaded images
	I1122 00:33:19.699594  271909 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon
	I1122 00:33:19.699647  271909 preload.go:238] Found /home/jenkins/minikube-integration/21934-9122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1122 00:33:19.699660  271909 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1122 00:33:19.699762  271909 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/embed-certs-084979/config.json ...
	I1122 00:33:19.721368  271909 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon, skipping pull
	I1122 00:33:19.721386  271909 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e exists in daemon, skipping load
	I1122 00:33:19.721401  271909 cache.go:243] Successfully downloaded all kic artifacts
	I1122 00:33:19.721431  271909 start.go:360] acquireMachinesLock for embed-certs-084979: {Name:mkdbb4c4ccc5b23cd8525c30101b33a32058591d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:33:19.721495  271909 start.go:364] duration metric: took 42.563µs to acquireMachinesLock for "embed-certs-084979"
	I1122 00:33:19.721516  271909 start.go:96] Skipping create...Using existing machine configuration
	I1122 00:33:19.721526  271909 fix.go:54] fixHost starting: 
	I1122 00:33:19.721770  271909 cli_runner.go:164] Run: docker container inspect embed-certs-084979 --format={{.State.Status}}
	I1122 00:33:19.738738  271909 fix.go:112] recreateIfNeeded on embed-certs-084979: state=Stopped err=<nil>
	W1122 00:33:19.738779  271909 fix.go:138] unexpected machine state, will restart: <nil>
	I1122 00:33:18.206028  266241 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1122 00:33:18.211178  266241 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1122 00:33:18.211198  266241 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1122 00:33:18.224333  266241 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1122 00:33:18.424680  266241 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1122 00:33:18.424767  266241 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:33:18.424791  266241 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-531189 minikube.k8s.io/updated_at=2025_11_22T00_33_18_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785 minikube.k8s.io/name=newest-cni-531189 minikube.k8s.io/primary=true
	I1122 00:33:18.433879  266241 ops.go:34] apiserver oom_adj: -16
	I1122 00:33:18.504904  266241 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:33:19.005274  266241 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:33:19.505002  266241 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:33:20.004976  266241 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:33:20.505197  266241 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:33:21.005910  266241 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:33:21.505843  266241 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:33:22.005517  266241 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:33:22.505948  266241 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:33:23.005149  266241 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:33:23.067592  266241 kubeadm.go:1114] duration metric: took 4.642882926s to wait for elevateKubeSystemPrivileges
	I1122 00:33:23.067628  266241 kubeadm.go:403] duration metric: took 15.612157621s to StartCluster
	I1122 00:33:23.067651  266241 settings.go:142] acquiring lock: {Name:mk281bec5fc7c41e6f3fe8d6a1502b13a2db8fe5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:33:23.067719  266241 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21934-9122/kubeconfig
	I1122 00:33:23.069296  266241 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/kubeconfig: {Name:mk85f0b9d89ff824568ecbebf0d1111042a6bb93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:33:23.069537  266241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1122 00:33:23.069552  266241 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1122 00:33:23.069618  266241 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1122 00:33:23.069752  266241 config.go:182] Loaded profile config "newest-cni-531189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:33:23.069770  266241 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-531189"
	I1122 00:33:23.069798  266241 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-531189"
	I1122 00:33:23.069792  266241 addons.go:70] Setting default-storageclass=true in profile "newest-cni-531189"
	I1122 00:33:23.069835  266241 host.go:66] Checking if "newest-cni-531189" exists ...
	I1122 00:33:23.069831  266241 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-531189"
	I1122 00:33:23.070250  266241 cli_runner.go:164] Run: docker container inspect newest-cni-531189 --format={{.State.Status}}
	I1122 00:33:23.070354  266241 cli_runner.go:164] Run: docker container inspect newest-cni-531189 --format={{.State.Status}}
	I1122 00:33:23.070903  266241 out.go:179] * Verifying Kubernetes components...
	I1122 00:33:23.072129  266241 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:33:23.094413  266241 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1122 00:33:23.094543  266241 addons.go:239] Setting addon default-storageclass=true in "newest-cni-531189"
	I1122 00:33:23.094584  266241 host.go:66] Checking if "newest-cni-531189" exists ...
	I1122 00:33:23.095102  266241 cli_runner.go:164] Run: docker container inspect newest-cni-531189 --format={{.State.Status}}
	I1122 00:33:23.095600  266241 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:33:23.095618  266241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1122 00:33:23.095666  266241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531189
	I1122 00:33:23.121395  266241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/newest-cni-531189/id_rsa Username:docker}
	I1122 00:33:23.128262  266241 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1122 00:33:23.128287  266241 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1122 00:33:23.128349  266241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531189
	I1122 00:33:23.154810  266241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/newest-cni-531189/id_rsa Username:docker}
	I1122 00:33:23.168180  266241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1122 00:33:23.224691  266241 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:33:23.235352  266241 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:33:23.273657  266241 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1122 00:33:23.364555  266241 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1122 00:33:23.365010  266241 api_server.go:52] waiting for apiserver process to appear ...
	I1122 00:33:23.365109  266241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:33:23.569254  266241 api_server.go:72] duration metric: took 499.667104ms to wait for apiserver process to appear ...
	I1122 00:33:23.569291  266241 api_server.go:88] waiting for apiserver healthz status ...
	I1122 00:33:23.569312  266241 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1122 00:33:23.575380  266241 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1122 00:33:23.576259  266241 api_server.go:141] control plane version: v1.34.1
	I1122 00:33:23.576289  266241 api_server.go:131] duration metric: took 6.990367ms to wait for apiserver health ...
	I1122 00:33:23.576301  266241 system_pods.go:43] waiting for kube-system pods to appear ...
	I1122 00:33:23.577787  266241 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1122 00:33:23.579254  266241 system_pods.go:59] 9 kube-system pods found
	I1122 00:33:23.579295  266241 addons.go:530] duration metric: took 509.676957ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1122 00:33:23.579362  266241 system_pods.go:61] "coredns-66bc5c9577-72kgm" [3bcbb420-b262-4fbf-a58a-71256d3fd603] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1122 00:33:23.579402  266241 system_pods.go:61] "coredns-66bc5c9577-bc2kh" [0b3f98b7-386c-4f52-825d-4c30ae7caa9f] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1122 00:33:23.579414  266241 system_pods.go:61] "etcd-newest-cni-531189" [04d1cce2-a1f6-4f51-9bcc-8f7080701d1f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1122 00:33:23.579424  266241 system_pods.go:61] "kindnet-2r5vl" [e3ab47c0-fc8c-4b02-8905-b3975fc5fe58] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1122 00:33:23.579433  266241 system_pods.go:61] "kube-apiserver-newest-cni-531189" [f5b06aeb-cb12-4c70-8eb2-4334f77ce4d6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1122 00:33:23.579441  266241 system_pods.go:61] "kube-controller-manager-newest-cni-531189" [bdef5fc8-ce47-48eb-9109-2a7505f50fad] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1122 00:33:23.579450  266241 system_pods.go:61] "kube-proxy-x8pr8" [5b238c04-98fa-46db-91e7-73a2ff0cb690] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1122 00:33:23.579458  266241 system_pods.go:61] "kube-scheduler-newest-cni-531189" [61447088-de05-4c9a-88f1-50f0e78aace7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1122 00:33:23.579464  266241 system_pods.go:61] "storage-provisioner" [db3f32ea-4aa1-4ccf-aebb-39d818606a7e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1122 00:33:23.579472  266241 system_pods.go:74] duration metric: took 3.16404ms to wait for pod list to return data ...
	I1122 00:33:23.579481  266241 default_sa.go:34] waiting for default service account to be created ...
	I1122 00:33:23.582158  266241 default_sa.go:45] found service account: "default"
	I1122 00:33:23.582211  266241 default_sa.go:55] duration metric: took 2.72223ms for default service account to be created ...
	I1122 00:33:23.582241  266241 kubeadm.go:587] duration metric: took 512.661168ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1122 00:33:23.582285  266241 node_conditions.go:102] verifying NodePressure condition ...
	I1122 00:33:23.584569  266241 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1122 00:33:23.584601  266241 node_conditions.go:123] node cpu capacity is 8
	I1122 00:33:23.584620  266241 node_conditions.go:105] duration metric: took 2.318301ms to run NodePressure ...
	I1122 00:33:23.584634  266241 start.go:242] waiting for startup goroutines ...
	I1122 00:33:23.869233  266241 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-531189" context rescaled to 1 replicas
	I1122 00:33:23.869277  266241 start.go:247] waiting for cluster config update ...
	I1122 00:33:23.869292  266241 start.go:256] writing updated cluster config ...
	I1122 00:33:23.869597  266241 ssh_runner.go:195] Run: rm -f paused
	I1122 00:33:23.918865  266241 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1122 00:33:23.921412  266241 out.go:179] * Done! kubectl is now configured to use "newest-cni-531189" cluster and "default" namespace by default
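
	The "minor skew: 0" line above is minikube comparing the kubectl client's minor version (1.34.2) against the cluster's (1.34.1); only the minor component matters for the skew warning. A minimal Go sketch of that comparison (illustrative only; minorSkew is a hypothetical helper, not minikube's actual function, which uses proper semver parsing):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	// minorSkew returns the absolute difference between the minor
	// version components of two "major.minor.patch" strings.
	func minorSkew(client, server string) (int, error) {
		minor := func(v string) (int, error) {
			parts := strings.Split(v, ".")
			if len(parts) < 2 {
				return 0, fmt.Errorf("bad version %q", v)
			}
			return strconv.Atoi(parts[1])
		}
		c, err := minor(client)
		if err != nil {
			return 0, err
		}
		s, err := minor(server)
		if err != nil {
			return 0, err
		}
		if c > s {
			return c - s, nil
		}
		return s - c, nil
	}

	func main() {
		skew, _ := minorSkew("1.34.2", "1.34.1")
		fmt.Println("minor skew:", skew) // 0
	}
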
	I1122 00:33:19.740262  271909 out.go:252] * Restarting existing docker container for "embed-certs-084979" ...
	I1122 00:33:19.740347  271909 cli_runner.go:164] Run: docker start embed-certs-084979
	I1122 00:33:20.036119  271909 cli_runner.go:164] Run: docker container inspect embed-certs-084979 --format={{.State.Status}}
	I1122 00:33:20.055996  271909 kic.go:430] container "embed-certs-084979" state is running.
	I1122 00:33:20.056369  271909 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-084979
	I1122 00:33:20.076119  271909 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/embed-certs-084979/config.json ...
	I1122 00:33:20.076314  271909 machine.go:94] provisionDockerMachine start ...
	I1122 00:33:20.076380  271909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-084979
	I1122 00:33:20.094365  271909 main.go:143] libmachine: Using SSH client type: native
	I1122 00:33:20.094671  271909 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1122 00:33:20.094695  271909 main.go:143] libmachine: About to run SSH command:
	hostname
	I1122 00:33:20.095470  271909 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47732->127.0.0.1:33088: read: connection reset by peer
	I1122 00:33:23.239165  271909 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-084979
	
	I1122 00:33:23.239193  271909 ubuntu.go:182] provisioning hostname "embed-certs-084979"
	I1122 00:33:23.239270  271909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-084979
	I1122 00:33:23.267203  271909 main.go:143] libmachine: Using SSH client type: native
	I1122 00:33:23.267504  271909 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1122 00:33:23.267521  271909 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-084979 && echo "embed-certs-084979" | sudo tee /etc/hostname
	I1122 00:33:23.415022  271909 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-084979
	
	I1122 00:33:23.415186  271909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-084979
	I1122 00:33:23.442968  271909 main.go:143] libmachine: Using SSH client type: native
	I1122 00:33:23.443355  271909 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1122 00:33:23.443381  271909 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-084979' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-084979/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-084979' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1122 00:33:23.578909  271909 main.go:143] libmachine: SSH cmd err, output: <nil>: 
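
	The shell snippet sent above is a guarded /etc/hosts rewrite: it leaves the file alone if some entry already maps to the new hostname, otherwise it rewrites the 127.0.1.1 line in place, and only appends a new line as a last resort. A minimal Go sketch of the same decision logic (a pure string transformation for illustration, assuming space-separated fields; ensureHostname is a hypothetical helper run locally rather than over SSH):

	package main

	import (
		"fmt"
		"strings"
	)

	// ensureHostname mirrors the script: keep the file untouched if the
	// name is already present, prefer rewriting 127.0.1.1, else append.
	func ensureHostname(hosts, name string) string {
		lines := strings.Split(hosts, "\n")
		for _, l := range lines {
			if strings.HasSuffix(strings.TrimSpace(l), " "+name) {
				return hosts // already mapped
			}
		}
		for i, l := range lines {
			if strings.HasPrefix(l, "127.0.1.1") {
				lines[i] = "127.0.1.1 " + name
				return strings.Join(lines, "\n")
			}
		}
		return hosts + "\n127.0.1.1 " + name
	}

	func main() {
		fmt.Println(ensureHostname("127.0.0.1 localhost\n127.0.1.1 old-name", "embed-certs-084979"))
	}
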
	I1122 00:33:23.578936  271909 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21934-9122/.minikube CaCertPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21934-9122/.minikube}
	I1122 00:33:23.578952  271909 ubuntu.go:190] setting up certificates
	I1122 00:33:23.578962  271909 provision.go:84] configureAuth start
	I1122 00:33:23.579012  271909 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-084979
	I1122 00:33:23.600564  271909 provision.go:143] copyHostCerts
	I1122 00:33:23.600616  271909 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9122/.minikube/ca.pem, removing ...
	I1122 00:33:23.600629  271909 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9122/.minikube/ca.pem
	I1122 00:33:23.600689  271909 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21934-9122/.minikube/ca.pem (1078 bytes)
	I1122 00:33:23.600788  271909 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9122/.minikube/cert.pem, removing ...
	I1122 00:33:23.600797  271909 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9122/.minikube/cert.pem
	I1122 00:33:23.600823  271909 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21934-9122/.minikube/cert.pem (1123 bytes)
	I1122 00:33:23.600891  271909 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9122/.minikube/key.pem, removing ...
	I1122 00:33:23.600898  271909 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9122/.minikube/key.pem
	I1122 00:33:23.600930  271909 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21934-9122/.minikube/key.pem (1675 bytes)
	I1122 00:33:23.600994  271909 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21934-9122/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca-key.pem org=jenkins.embed-certs-084979 san=[127.0.0.1 192.168.94.2 embed-certs-084979 localhost minikube]
	I1122 00:33:23.641972  271909 provision.go:177] copyRemoteCerts
	I1122 00:33:23.642021  271909 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1122 00:33:23.642067  271909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-084979
	I1122 00:33:23.659300  271909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/embed-certs-084979/id_rsa Username:docker}
	I1122 00:33:23.750824  271909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1122 00:33:23.767644  271909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1122 00:33:23.783985  271909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1122 00:33:23.800259  271909 provision.go:87] duration metric: took 221.28208ms to configureAuth
	I1122 00:33:23.800281  271909 ubuntu.go:206] setting minikube options for container-runtime
	I1122 00:33:23.800456  271909 config.go:182] Loaded profile config "embed-certs-084979": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:33:23.800557  271909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-084979
	I1122 00:33:23.818857  271909 main.go:143] libmachine: Using SSH client type: native
	I1122 00:33:23.819105  271909 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1122 00:33:23.819122  271909 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1122 00:33:24.141432  271909 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1122 00:33:24.141463  271909 machine.go:97] duration metric: took 4.065134365s to provisionDockerMachine
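
	The restart above writes CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12', i.e. CRI-O will pull from registries on the default Kubernetes service CIDR without TLS verification. That /12 spans 10.96.0.0 through 10.111.255.255, which is what makes in-cluster registry services reachable this way. A minimal Go sketch of the CIDR membership check (illustrative only):

	package main

	import (
		"fmt"
		"net/netip"
	)

	func main() {
		// 10.96.0.0/12 is the service CIDR passed to CRI-O above.
		cidr := netip.MustParsePrefix("10.96.0.0/12")
		for _, ip := range []string{"10.96.0.10", "10.112.0.1", "10.244.0.21"} {
			fmt.Println(ip, cidr.Contains(netip.MustParseAddr(ip)))
		}
		// Output: 10.96.0.10 true / 10.112.0.1 false / 10.244.0.21 false
	}
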
	I1122 00:33:24.141478  271909 start.go:293] postStartSetup for "embed-certs-084979" (driver="docker")
	I1122 00:33:24.141490  271909 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1122 00:33:24.141548  271909 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1122 00:33:24.141620  271909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-084979
	I1122 00:33:24.162474  271909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/embed-certs-084979/id_rsa Username:docker}
	I1122 00:33:24.255746  271909 ssh_runner.go:195] Run: cat /etc/os-release
	I1122 00:33:24.259451  271909 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1122 00:33:24.259484  271909 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1122 00:33:24.259495  271909 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-9122/.minikube/addons for local assets ...
	I1122 00:33:24.259540  271909 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-9122/.minikube/files for local assets ...
	I1122 00:33:24.259640  271909 filesync.go:149] local asset: /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem -> 145852.pem in /etc/ssl/certs
	I1122 00:33:24.259760  271909 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1122 00:33:24.268087  271909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem --> /etc/ssl/certs/145852.pem (1708 bytes)
	I1122 00:33:24.285792  271909 start.go:296] duration metric: took 144.303637ms for postStartSetup
	I1122 00:33:24.285870  271909 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:33:24.285907  271909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-084979
	I1122 00:33:24.305129  271909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/embed-certs-084979/id_rsa Username:docker}
	I1122 00:33:24.394758  271909 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 00:33:24.399105  271909 fix.go:56] duration metric: took 4.677573464s for fixHost
	I1122 00:33:24.399135  271909 start.go:83] releasing machines lock for "embed-certs-084979", held for 4.677619645s
	I1122 00:33:24.399190  271909 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-084979
	I1122 00:33:24.417429  271909 ssh_runner.go:195] Run: cat /version.json
	I1122 00:33:24.417500  271909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-084979
	I1122 00:33:24.417541  271909 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1122 00:33:24.417599  271909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-084979
	I1122 00:33:24.437333  271909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/embed-certs-084979/id_rsa Username:docker}
	I1122 00:33:24.437447  271909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/embed-certs-084979/id_rsa Username:docker}
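
	Note the two back-to-back Run calls above, each with its own SSH client: reading /version.json and probing registry.k8s.io are independent checks, so they can be dispatched concurrently. A minimal sketch of that fan-out pattern using errgroup (assumptions: runCheck is a hypothetical stand-in for one remote command; real code would execute over the SSH sessions shown in the log):

	package main

	import (
		"context"
		"fmt"

		"golang.org/x/sync/errgroup"
	)

	// runCheck stands in for running one remote command, e.g.
	// `cat /version.json` or `curl -sS -m 2 https://registry.k8s.io/`.
	func runCheck(ctx context.Context, name string) error {
		fmt.Println("ran:", name)
		return nil
	}

	func main() {
		g, ctx := errgroup.WithContext(context.Background())
		for _, cmd := range []string{"cat /version.json", "curl -sS -m 2 https://registry.k8s.io/"} {
			cmd := cmd // per-iteration copy for pre-1.22 Go
			g.Go(func() error { return runCheck(ctx, cmd) })
		}
		if err := g.Wait(); err != nil {
			fmt.Println("check failed:", err)
		}
	}
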
	I1122 00:33:19.569689  218533 logs.go:123] Gathering logs for dmesg ...
	I1122 00:33:19.569722  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1122 00:33:19.587535  218533 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:33:19.587565  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1122 00:33:19.653838  218533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1122 00:33:19.653862  218533 logs.go:123] Gathering logs for kube-apiserver [31805891c113692793d324f62051c857bb1cbdf7c4889dd16e4c528a240a0a9b] ...
	I1122 00:33:19.653879  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 31805891c113692793d324f62051c857bb1cbdf7c4889dd16e4c528a240a0a9b"
	I1122 00:33:19.693182  218533 logs.go:123] Gathering logs for kube-scheduler [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33] ...
	I1122 00:33:19.693218  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:33:19.755020  218533 logs.go:123] Gathering logs for kube-controller-manager [dc92b1709a65ec6599e10e847369b5072bcf0cdafd3a4c895099133103c59b76] ...
	I1122 00:33:19.755049  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dc92b1709a65ec6599e10e847369b5072bcf0cdafd3a4c895099133103c59b76"
	I1122 00:33:22.286560  218533 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1122 00:33:22.286991  218533 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1122 00:33:22.287042  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:33:22.287123  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:33:22.316409  218533 cri.go:89] found id: "31805891c113692793d324f62051c857bb1cbdf7c4889dd16e4c528a240a0a9b"
	I1122 00:33:22.316427  218533 cri.go:89] found id: ""
	I1122 00:33:22.316435  218533 logs.go:282] 1 containers: [31805891c113692793d324f62051c857bb1cbdf7c4889dd16e4c528a240a0a9b]
	I1122 00:33:22.316480  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:33:22.320282  218533 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1122 00:33:22.320343  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:33:22.345912  218533 cri.go:89] found id: ""
	I1122 00:33:22.345941  218533 logs.go:282] 0 containers: []
	W1122 00:33:22.345950  218533 logs.go:284] No container was found matching "etcd"
	I1122 00:33:22.345956  218533 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1122 00:33:22.346006  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:33:22.373204  218533 cri.go:89] found id: ""
	I1122 00:33:22.373229  218533 logs.go:282] 0 containers: []
	W1122 00:33:22.373240  218533 logs.go:284] No container was found matching "coredns"
	I1122 00:33:22.373251  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:33:22.373304  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:33:22.399781  218533 cri.go:89] found id: "f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:33:22.399803  218533 cri.go:89] found id: ""
	I1122 00:33:22.399814  218533 logs.go:282] 1 containers: [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33]
	I1122 00:33:22.399860  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:33:22.403445  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:33:22.403493  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:33:22.428150  218533 cri.go:89] found id: ""
	I1122 00:33:22.428172  218533 logs.go:282] 0 containers: []
	W1122 00:33:22.428182  218533 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:33:22.428187  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:33:22.428245  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:33:22.453012  218533 cri.go:89] found id: "dc92b1709a65ec6599e10e847369b5072bcf0cdafd3a4c895099133103c59b76"
	I1122 00:33:22.453038  218533 cri.go:89] found id: ""
	I1122 00:33:22.453048  218533 logs.go:282] 1 containers: [dc92b1709a65ec6599e10e847369b5072bcf0cdafd3a4c895099133103c59b76]
	I1122 00:33:22.453118  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:33:22.456609  218533 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1122 00:33:22.456666  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:33:22.481421  218533 cri.go:89] found id: ""
	I1122 00:33:22.481444  218533 logs.go:282] 0 containers: []
	W1122 00:33:22.481452  218533 logs.go:284] No container was found matching "kindnet"
	I1122 00:33:22.481458  218533 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:33:22.481507  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:33:22.507715  218533 cri.go:89] found id: ""
	I1122 00:33:22.507739  218533 logs.go:282] 0 containers: []
	W1122 00:33:22.507748  218533 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:33:22.507759  218533 logs.go:123] Gathering logs for container status ...
	I1122 00:33:22.507781  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1122 00:33:22.544814  218533 logs.go:123] Gathering logs for kubelet ...
	I1122 00:33:22.544846  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:33:22.638926  218533 logs.go:123] Gathering logs for dmesg ...
	I1122 00:33:22.638954  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1122 00:33:22.653185  218533 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:33:22.653210  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1122 00:33:22.709636  218533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1122 00:33:22.709661  218533 logs.go:123] Gathering logs for kube-apiserver [31805891c113692793d324f62051c857bb1cbdf7c4889dd16e4c528a240a0a9b] ...
	I1122 00:33:22.709683  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 31805891c113692793d324f62051c857bb1cbdf7c4889dd16e4c528a240a0a9b"
	I1122 00:33:22.739831  218533 logs.go:123] Gathering logs for kube-scheduler [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33] ...
	I1122 00:33:22.739858  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:33:22.796990  218533 logs.go:123] Gathering logs for kube-controller-manager [dc92b1709a65ec6599e10e847369b5072bcf0cdafd3a4c895099133103c59b76] ...
	I1122 00:33:22.797022  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dc92b1709a65ec6599e10e847369b5072bcf0cdafd3a4c895099133103c59b76"
	I1122 00:33:22.822631  218533 logs.go:123] Gathering logs for CRI-O ...
	I1122 00:33:22.822656  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
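
	The gathering loop above repeats one pattern per component: shell out to `sudo crictl ps -a --quiet --name=<component>`, treat each whitespace-separated token of stdout as a container ID, then fetch that container's logs with `crictl logs --tail 400 <id>`. A minimal Go sketch of the listing step (illustrative; listContainerIDs is a hypothetical helper and assumes crictl is on PATH with sudo available):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainerIDs asks crictl for the quiet (IDs-only) listing of
	// all containers, running or not, whose name matches `name`.
	func listContainerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		ids, err := listContainerIDs("kube-apiserver")
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		fmt.Println("found", len(ids), "containers:", ids)
	}
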
	
	
	==> CRI-O <==
	Nov 22 00:33:23 newest-cni-531189 crio[780]: time="2025-11-22T00:33:23.424127409Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:33:23 newest-cni-531189 crio[780]: time="2025-11-22T00:33:23.424306271Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=6b15471c-9dea-4233-85c4-9d8eacf657d7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 22 00:33:23 newest-cni-531189 crio[780]: time="2025-11-22T00:33:23.42665884Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 22 00:33:23 newest-cni-531189 crio[780]: time="2025-11-22T00:33:23.427577981Z" level=info msg="Ran pod sandbox 0ca2da78061f0e4114ac7e15e7e83a377feaa3d9db70a92326cdb4b5f1e970ff with infra container: kube-system/kube-proxy-x8pr8/POD" id=6b15471c-9dea-4233-85c4-9d8eacf657d7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 22 00:33:23 newest-cni-531189 crio[780]: time="2025-11-22T00:33:23.427887341Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=f19b49e1-7ac1-40b9-9d44-c11395c1169c name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 22 00:33:23 newest-cni-531189 crio[780]: time="2025-11-22T00:33:23.428993141Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=ead04026-5d31-4f35-af0f-fa9329894ad3 name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:33:23 newest-cni-531189 crio[780]: time="2025-11-22T00:33:23.429435247Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 22 00:33:23 newest-cni-531189 crio[780]: time="2025-11-22T00:33:23.43032119Z" level=info msg="Ran pod sandbox 0bf7cb733290932a25bfa25076d131b5491c026f32974613016b5befd04cc869 with infra container: kube-system/kindnet-2r5vl/POD" id=f19b49e1-7ac1-40b9-9d44-c11395c1169c name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 22 00:33:23 newest-cni-531189 crio[780]: time="2025-11-22T00:33:23.431207338Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=ff53b666-042d-4c15-a29b-559f242d60f8 name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:33:23 newest-cni-531189 crio[780]: time="2025-11-22T00:33:23.43126884Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=a4e43b0a-e1b5-4711-8e46-f6c171fc8a1b name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:33:23 newest-cni-531189 crio[780]: time="2025-11-22T00:33:23.432278611Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=11836593-f12c-4dba-8f59-154c6ac30f3f name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:33:23 newest-cni-531189 crio[780]: time="2025-11-22T00:33:23.436443138Z" level=info msg="Creating container: kube-system/kube-proxy-x8pr8/kube-proxy" id=30383588-8e93-4691-9af3-d0a28ae19a28 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:33:23 newest-cni-531189 crio[780]: time="2025-11-22T00:33:23.436566516Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:33:23 newest-cni-531189 crio[780]: time="2025-11-22T00:33:23.437911566Z" level=info msg="Creating container: kube-system/kindnet-2r5vl/kindnet-cni" id=4646dc06-31d4-496d-9abe-5ae865ca9a94 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:33:23 newest-cni-531189 crio[780]: time="2025-11-22T00:33:23.438008295Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:33:23 newest-cni-531189 crio[780]: time="2025-11-22T00:33:23.446183125Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:33:23 newest-cni-531189 crio[780]: time="2025-11-22T00:33:23.446795177Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:33:23 newest-cni-531189 crio[780]: time="2025-11-22T00:33:23.448411346Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:33:23 newest-cni-531189 crio[780]: time="2025-11-22T00:33:23.448932802Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:33:23 newest-cni-531189 crio[780]: time="2025-11-22T00:33:23.478312456Z" level=info msg="Created container adb8e1862c256029ae634d8933c2d96c7257a10833dc6bed9305599347b98bfa: kube-system/kindnet-2r5vl/kindnet-cni" id=4646dc06-31d4-496d-9abe-5ae865ca9a94 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:33:23 newest-cni-531189 crio[780]: time="2025-11-22T00:33:23.478922187Z" level=info msg="Starting container: adb8e1862c256029ae634d8933c2d96c7257a10833dc6bed9305599347b98bfa" id=889c67ec-fc6d-45c9-b9cc-67934faac60b name=/runtime.v1.RuntimeService/StartContainer
	Nov 22 00:33:23 newest-cni-531189 crio[780]: time="2025-11-22T00:33:23.479391772Z" level=info msg="Created container 9bdec5c8e96e487292bd2805da8e3711d56b794e9f8d202e9dfd27533369e81f: kube-system/kube-proxy-x8pr8/kube-proxy" id=30383588-8e93-4691-9af3-d0a28ae19a28 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:33:23 newest-cni-531189 crio[780]: time="2025-11-22T00:33:23.479835511Z" level=info msg="Starting container: 9bdec5c8e96e487292bd2805da8e3711d56b794e9f8d202e9dfd27533369e81f" id=d85f76c2-aa0c-4b5c-87ca-c30186ba4958 name=/runtime.v1.RuntimeService/StartContainer
	Nov 22 00:33:23 newest-cni-531189 crio[780]: time="2025-11-22T00:33:23.480995854Z" level=info msg="Started container" PID=1636 containerID=adb8e1862c256029ae634d8933c2d96c7257a10833dc6bed9305599347b98bfa description=kube-system/kindnet-2r5vl/kindnet-cni id=889c67ec-fc6d-45c9-b9cc-67934faac60b name=/runtime.v1.RuntimeService/StartContainer sandboxID=0bf7cb733290932a25bfa25076d131b5491c026f32974613016b5befd04cc869
	Nov 22 00:33:23 newest-cni-531189 crio[780]: time="2025-11-22T00:33:23.482659762Z" level=info msg="Started container" PID=1635 containerID=9bdec5c8e96e487292bd2805da8e3711d56b794e9f8d202e9dfd27533369e81f description=kube-system/kube-proxy-x8pr8/kube-proxy id=d85f76c2-aa0c-4b5c-87ca-c30186ba4958 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0ca2da78061f0e4114ac7e15e7e83a377feaa3d9db70a92326cdb4b5f1e970ff
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	adb8e1862c256       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   1 second ago        Running             kindnet-cni               0                   0bf7cb7332909       kindnet-2r5vl                               kube-system
	9bdec5c8e96e4       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   1 second ago        Running             kube-proxy                0                   0ca2da78061f0       kube-proxy-x8pr8                            kube-system
	5bf6f75c5d5af       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   11 seconds ago      Running             kube-scheduler            0                   d24816335da93       kube-scheduler-newest-cni-531189            kube-system
	026efc124b056       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   11 seconds ago      Running             kube-controller-manager   0                   c5cbbae9ee05b       kube-controller-manager-newest-cni-531189   kube-system
	9695ae45ecd8e       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   11 seconds ago      Running             etcd                      0                   d2067f22dab51       etcd-newest-cni-531189                      kube-system
	13ed287048127       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   11 seconds ago      Running             kube-apiserver            0                   3e1ba07ce5f4a       kube-apiserver-newest-cni-531189            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-531189
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-531189
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785
	                    minikube.k8s.io/name=newest-cni-531189
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_22T00_33_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 22 Nov 2025 00:33:14 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-531189
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 22 Nov 2025 00:33:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 22 Nov 2025 00:33:17 +0000   Sat, 22 Nov 2025 00:33:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 22 Nov 2025 00:33:17 +0000   Sat, 22 Nov 2025 00:33:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 22 Nov 2025 00:33:17 +0000   Sat, 22 Nov 2025 00:33:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 22 Nov 2025 00:33:17 +0000   Sat, 22 Nov 2025 00:33:13 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-531189
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 5665009e93b91d39dc05718b691e3875
	  System UUID:                2ea5badc-2e5c-4528-82d1-003ac6cb3bf5
	  Boot ID:                    8e6acb0e-95c3-406d-a4d5-d86e16610b0f
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-531189                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8s
	  kube-system                 kindnet-2r5vl                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3s
	  kube-system                 kube-apiserver-newest-cni-531189             250m (3%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-controller-manager-newest-cni-531189    200m (2%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-proxy-x8pr8                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-newest-cni-531189             100m (1%)     0 (0%)      0 (0%)           0 (0%)         9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 1s                 kube-proxy       
	  Normal  Starting                 12s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  12s (x8 over 12s)  kubelet          Node newest-cni-531189 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12s (x8 over 12s)  kubelet          Node newest-cni-531189 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12s (x8 over 12s)  kubelet          Node newest-cni-531189 status is now: NodeHasSufficientPID
	  Normal  Starting                 8s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s                 kubelet          Node newest-cni-531189 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s                 kubelet          Node newest-cni-531189 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s                 kubelet          Node newest-cni-531189 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3s                 node-controller  Node newest-cni-531189 event: Registered Node newest-cni-531189 in Controller
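
	The node.kubernetes.io/not-ready:NoSchedule taint shown in this describe output is why the coredns and storage-provisioner pods were reported Unschedulable earlier in the log: until the CNI is ready the kubelet keeps the taint, and the scheduler refuses the node. A minimal Go sketch of checking a node object for that taint using the k8s.io/api types (illustrative only):

	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
	)

	// hasNotReadyTaint reports whether a node still carries the taint
	// that kept the Pending pods above off the node.
	func hasNotReadyTaint(node *corev1.Node) bool {
		for _, t := range node.Spec.Taints {
			if t.Key == "node.kubernetes.io/not-ready" && t.Effect == corev1.TaintEffectNoSchedule {
				return true
			}
		}
		return false
	}

	func main() {
		n := &corev1.Node{
			Spec: corev1.NodeSpec{
				Taints: []corev1.Taint{{Key: "node.kubernetes.io/not-ready", Effect: corev1.TaintEffectNoSchedule}},
			},
		}
		fmt.Println(hasNotReadyTaint(n)) // true
	}
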
	
	
	==> dmesg <==
	[  +0.082960] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023673] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.276450] kauditd_printk_skb: 47 callbacks suppressed
	[Nov21 23:48] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.062274] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.023896] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.023887] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.023879] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.023873] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +2.047773] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +4.031577] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[Nov21 23:49] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[ +16.383304] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[ +32.252695] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	
	
	==> etcd [9695ae45ecd8e0c58d560d651a8cb1cf29901a33e49a600a1453362f4d7b0e3a] <==
	{"level":"warn","ts":"2025-11-22T00:33:14.286131Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:14.292013Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:14.301277Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:14.308335Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:14.315739Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:14.322628Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:14.329379Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:14.336432Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:14.342442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:14.349458Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:14.359164Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:14.365594Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:14.371456Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:14.378031Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:14.385301Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:14.392899Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:14.400695Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:14.407760Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:14.413899Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:14.421150Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:14.427715Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:14.433620Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:14.450028Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:14.456245Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:14.513644Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57694","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 00:33:25 up  1:15,  0 user,  load average: 2.23, 2.79, 1.84
	Linux newest-cni-531189 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [adb8e1862c256029ae634d8933c2d96c7257a10833dc6bed9305599347b98bfa] <==
	I1122 00:33:23.669267       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1122 00:33:23.669487       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1122 00:33:23.669621       1 main.go:148] setting mtu 1500 for CNI 
	I1122 00:33:23.669637       1 main.go:178] kindnetd IP family: "ipv4"
	I1122 00:33:23.669657       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-22T00:33:23Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1122 00:33:23.965092       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1122 00:33:23.965133       1 controller.go:381] "Waiting for informer caches to sync"
	I1122 00:33:23.965146       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1122 00:33:23.966238       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1122 00:33:24.265227       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1122 00:33:24.265369       1 metrics.go:72] Registering metrics
	I1122 00:33:24.265432       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [13ed287048127001e1d5f6f2952e18c7c2386d90176d63dd9826116457ae46ce] <==
	I1122 00:33:14.980731       1 policy_source.go:240] refreshing policies
	E1122 00:33:15.031205       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1122 00:33:15.078336       1 controller.go:667] quota admission added evaluator for: namespaces
	I1122 00:33:15.082130       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:33:15.082178       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1122 00:33:15.087870       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:33:15.087941       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1122 00:33:15.164654       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1122 00:33:15.880683       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1122 00:33:15.884238       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1122 00:33:15.884255       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1122 00:33:16.318534       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1122 00:33:16.354545       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1122 00:33:16.487301       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1122 00:33:16.493430       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1122 00:33:16.494435       1 controller.go:667] quota admission added evaluator for: endpoints
	I1122 00:33:16.498800       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1122 00:33:17.347582       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1122 00:33:17.595649       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1122 00:33:17.603693       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1122 00:33:17.609892       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1122 00:33:22.499554       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1122 00:33:23.400718       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1122 00:33:23.453300       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:33:23.457749       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [026efc124b056446626926ccda1c66216be5d3bcab0cd8d1d1bf79e8da17b2d8] <==
	I1122 00:33:22.303700       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1122 00:33:22.303714       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1122 00:33:22.303722       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1122 00:33:22.308868       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1122 00:33:22.309103       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="newest-cni-531189" podCIDRs=["10.42.0.0/24"]
	I1122 00:33:22.311170       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1122 00:33:22.346196       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1122 00:33:22.346206       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1122 00:33:22.346251       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1122 00:33:22.347366       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1122 00:33:22.347396       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1122 00:33:22.347470       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1122 00:33:22.347478       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1122 00:33:22.347488       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1122 00:33:22.347522       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1122 00:33:22.347964       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1122 00:33:22.347994       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1122 00:33:22.348630       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1122 00:33:22.349808       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1122 00:33:22.349866       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1122 00:33:22.351063       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1122 00:33:22.351066       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1122 00:33:22.351774       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1122 00:33:22.357813       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1122 00:33:22.372555       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [9bdec5c8e96e487292bd2805da8e3711d56b794e9f8d202e9dfd27533369e81f] <==
	I1122 00:33:23.523816       1 server_linux.go:53] "Using iptables proxy"
	I1122 00:33:23.598163       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1122 00:33:23.699104       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1122 00:33:23.699155       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1122 00:33:23.699264       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1122 00:33:23.716842       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1122 00:33:23.716892       1 server_linux.go:132] "Using iptables Proxier"
	I1122 00:33:23.721864       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1122 00:33:23.722280       1 server.go:527] "Version info" version="v1.34.1"
	I1122 00:33:23.722316       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:33:23.723659       1 config.go:309] "Starting node config controller"
	I1122 00:33:23.723700       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1122 00:33:23.723712       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1122 00:33:23.723791       1 config.go:403] "Starting serviceCIDR config controller"
	I1122 00:33:23.723807       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1122 00:33:23.723834       1 config.go:106] "Starting endpoint slice config controller"
	I1122 00:33:23.723842       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1122 00:33:23.723844       1 config.go:200] "Starting service config controller"
	I1122 00:33:23.723865       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1122 00:33:23.824212       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1122 00:33:23.824288       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1122 00:33:23.824338       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
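
	Every "Waiting for caches to sync" / "Caches are synced" pair in the kube-proxy output above is the standard client-go startup idiom: block until each informer's HasSynced returns true before handling events, so controllers never act on a partially populated cache. A minimal sketch of that call (here synced is a stand-in for a real informer's HasSynced):

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/cache"
	)

	func main() {
		stop := make(chan struct{})
		defer close(stop)
		// Stand-in for informer.HasSynced; a real controller passes
		// one InformerSynced per informer it depends on.
		synced := func() bool { return true }
		if !cache.WaitForCacheSync(stop, synced) {
			fmt.Println("timed out waiting for caches to sync")
			return
		}
		fmt.Println("caches are synced")
	}
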
	
	
	==> kube-scheduler [5bf6f75c5d5afa9de50902c65b3f73c9ead7798e22ef94ea814fd4e0180836e4] <==
	E1122 00:33:14.933958       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1122 00:33:14.934092       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1122 00:33:14.934154       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1122 00:33:14.934169       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1122 00:33:14.934202       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1122 00:33:14.934225       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1122 00:33:14.934238       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1122 00:33:14.934292       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1122 00:33:14.934336       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1122 00:33:14.934381       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1122 00:33:14.934406       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1122 00:33:15.818422       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1122 00:33:15.871644       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1122 00:33:15.891618       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1122 00:33:15.897544       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1122 00:33:15.912344       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1122 00:33:15.919329       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1122 00:33:15.954680       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1122 00:33:15.977774       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1122 00:33:15.997491       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1122 00:33:16.003597       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1122 00:33:16.112223       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1122 00:33:16.139631       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1122 00:33:16.158930       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	I1122 00:33:18.631088       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 22 00:33:18 newest-cni-531189 kubelet[1340]: I1122 00:33:18.445992    1340 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-531189"
	Nov 22 00:33:18 newest-cni-531189 kubelet[1340]: E1122 00:33:18.451006    1340 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-531189\" already exists" pod="kube-system/etcd-newest-cni-531189"
	Nov 22 00:33:18 newest-cni-531189 kubelet[1340]: E1122 00:33:18.452206    1340 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-531189\" already exists" pod="kube-system/kube-scheduler-newest-cni-531189"
	Nov 22 00:33:18 newest-cni-531189 kubelet[1340]: E1122 00:33:18.452967    1340 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-531189\" already exists" pod="kube-system/kube-apiserver-newest-cni-531189"
	Nov 22 00:33:18 newest-cni-531189 kubelet[1340]: I1122 00:33:18.463949    1340 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-531189" podStartSLOduration=1.463933205 podStartE2EDuration="1.463933205s" podCreationTimestamp="2025-11-22 00:33:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:33:18.46384044 +0000 UTC m=+1.126287161" watchObservedRunningTime="2025-11-22 00:33:18.463933205 +0000 UTC m=+1.126379898"
	Nov 22 00:33:18 newest-cni-531189 kubelet[1340]: I1122 00:33:18.471751    1340 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-531189" podStartSLOduration=1.471733506 podStartE2EDuration="1.471733506s" podCreationTimestamp="2025-11-22 00:33:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:33:18.471539304 +0000 UTC m=+1.133986020" watchObservedRunningTime="2025-11-22 00:33:18.471733506 +0000 UTC m=+1.134180203"
	Nov 22 00:33:18 newest-cni-531189 kubelet[1340]: I1122 00:33:18.478710    1340 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-531189" podStartSLOduration=2.478696082 podStartE2EDuration="2.478696082s" podCreationTimestamp="2025-11-22 00:33:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:33:18.47825512 +0000 UTC m=+1.140701833" watchObservedRunningTime="2025-11-22 00:33:18.478696082 +0000 UTC m=+1.141142779"
	Nov 22 00:33:18 newest-cni-531189 kubelet[1340]: I1122 00:33:18.498617    1340 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-531189" podStartSLOduration=1.498602334 podStartE2EDuration="1.498602334s" podCreationTimestamp="2025-11-22 00:33:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:33:18.489462956 +0000 UTC m=+1.151909653" watchObservedRunningTime="2025-11-22 00:33:18.498602334 +0000 UTC m=+1.161049029"
	Nov 22 00:33:22 newest-cni-531189 kubelet[1340]: I1122 00:33:22.361616    1340 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 22 00:33:22 newest-cni-531189 kubelet[1340]: I1122 00:33:22.362328    1340 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 22 00:33:22 newest-cni-531189 kubelet[1340]: I1122 00:33:22.550795    1340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5b238c04-98fa-46db-91e7-73a2ff0cb690-kube-proxy\") pod \"kube-proxy-x8pr8\" (UID: \"5b238c04-98fa-46db-91e7-73a2ff0cb690\") " pod="kube-system/kube-proxy-x8pr8"
	Nov 22 00:33:22 newest-cni-531189 kubelet[1340]: I1122 00:33:22.550848    1340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e3ab47c0-fc8c-4b02-8905-b3975fc5fe58-xtables-lock\") pod \"kindnet-2r5vl\" (UID: \"e3ab47c0-fc8c-4b02-8905-b3975fc5fe58\") " pod="kube-system/kindnet-2r5vl"
	Nov 22 00:33:22 newest-cni-531189 kubelet[1340]: I1122 00:33:22.550880    1340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5b238c04-98fa-46db-91e7-73a2ff0cb690-xtables-lock\") pod \"kube-proxy-x8pr8\" (UID: \"5b238c04-98fa-46db-91e7-73a2ff0cb690\") " pod="kube-system/kube-proxy-x8pr8"
	Nov 22 00:33:22 newest-cni-531189 kubelet[1340]: I1122 00:33:22.550904    1340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5b238c04-98fa-46db-91e7-73a2ff0cb690-lib-modules\") pod \"kube-proxy-x8pr8\" (UID: \"5b238c04-98fa-46db-91e7-73a2ff0cb690\") " pod="kube-system/kube-proxy-x8pr8"
	Nov 22 00:33:22 newest-cni-531189 kubelet[1340]: I1122 00:33:22.550932    1340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v25h4\" (UniqueName: \"kubernetes.io/projected/5b238c04-98fa-46db-91e7-73a2ff0cb690-kube-api-access-v25h4\") pod \"kube-proxy-x8pr8\" (UID: \"5b238c04-98fa-46db-91e7-73a2ff0cb690\") " pod="kube-system/kube-proxy-x8pr8"
	Nov 22 00:33:22 newest-cni-531189 kubelet[1340]: I1122 00:33:22.550956    1340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e3ab47c0-fc8c-4b02-8905-b3975fc5fe58-lib-modules\") pod \"kindnet-2r5vl\" (UID: \"e3ab47c0-fc8c-4b02-8905-b3975fc5fe58\") " pod="kube-system/kindnet-2r5vl"
	Nov 22 00:33:22 newest-cni-531189 kubelet[1340]: I1122 00:33:22.550977    1340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnp9f\" (UniqueName: \"kubernetes.io/projected/e3ab47c0-fc8c-4b02-8905-b3975fc5fe58-kube-api-access-tnp9f\") pod \"kindnet-2r5vl\" (UID: \"e3ab47c0-fc8c-4b02-8905-b3975fc5fe58\") " pod="kube-system/kindnet-2r5vl"
	Nov 22 00:33:22 newest-cni-531189 kubelet[1340]: I1122 00:33:22.551013    1340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/e3ab47c0-fc8c-4b02-8905-b3975fc5fe58-cni-cfg\") pod \"kindnet-2r5vl\" (UID: \"e3ab47c0-fc8c-4b02-8905-b3975fc5fe58\") " pod="kube-system/kindnet-2r5vl"
	Nov 22 00:33:22 newest-cni-531189 kubelet[1340]: E1122 00:33:22.656985    1340 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 22 00:33:22 newest-cni-531189 kubelet[1340]: E1122 00:33:22.657015    1340 projected.go:196] Error preparing data for projected volume kube-api-access-tnp9f for pod kube-system/kindnet-2r5vl: configmap "kube-root-ca.crt" not found
	Nov 22 00:33:22 newest-cni-531189 kubelet[1340]: E1122 00:33:22.657100    1340 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e3ab47c0-fc8c-4b02-8905-b3975fc5fe58-kube-api-access-tnp9f podName:e3ab47c0-fc8c-4b02-8905-b3975fc5fe58 nodeName:}" failed. No retries permitted until 2025-11-22 00:33:23.157071943 +0000 UTC m=+5.819518620 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-tnp9f" (UniqueName: "kubernetes.io/projected/e3ab47c0-fc8c-4b02-8905-b3975fc5fe58-kube-api-access-tnp9f") pod "kindnet-2r5vl" (UID: "e3ab47c0-fc8c-4b02-8905-b3975fc5fe58") : configmap "kube-root-ca.crt" not found
	Nov 22 00:33:22 newest-cni-531189 kubelet[1340]: E1122 00:33:22.657545    1340 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 22 00:33:22 newest-cni-531189 kubelet[1340]: E1122 00:33:22.657569    1340 projected.go:196] Error preparing data for projected volume kube-api-access-v25h4 for pod kube-system/kube-proxy-x8pr8: configmap "kube-root-ca.crt" not found
	Nov 22 00:33:22 newest-cni-531189 kubelet[1340]: E1122 00:33:22.657612    1340 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5b238c04-98fa-46db-91e7-73a2ff0cb690-kube-api-access-v25h4 podName:5b238c04-98fa-46db-91e7-73a2ff0cb690 nodeName:}" failed. No retries permitted until 2025-11-22 00:33:23.157598806 +0000 UTC m=+5.820045493 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-v25h4" (UniqueName: "kubernetes.io/projected/5b238c04-98fa-46db-91e7-73a2ff0cb690-kube-api-access-v25h4") pod "kube-proxy-x8pr8" (UID: "5b238c04-98fa-46db-91e7-73a2ff0cb690") : configmap "kube-root-ca.crt" not found
	Nov 22 00:33:24 newest-cni-531189 kubelet[1340]: I1122 00:33:24.485648    1340 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-2r5vl" podStartSLOduration=2.485623219 podStartE2EDuration="2.485623219s" podCreationTimestamp="2025-11-22 00:33:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:33:24.474300626 +0000 UTC m=+7.136747347" watchObservedRunningTime="2025-11-22 00:33:24.485623219 +0000 UTC m=+7.148069915"
	

-- /stdout --
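
Note: the errors in the log dump above are startup noise rather than the test failure itself. The kube-scheduler's "Failed to watch ... is forbidden" messages appear while its informers start before RBAC is fully served, and they stop once caches sync at 00:33:18; the kubelet's configmap "kube-root-ca.crt" not found mount failures retry every 500ms until the root-ca-cert-publisher controller creates that ConfigMap, after which the pods start (kindnet is Running by 00:33:24). A quick manual check of both conditions (a sketch, assuming the kubeconfig context named in the logs is still reachable):

	kubectl --context newest-cni-531189 -n kube-system get configmap kube-root-ca.crt
	kubectl --context newest-cni-531189 get clusterrolebinding system:kube-scheduler
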
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-531189 -n newest-cni-531189
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-531189 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-bc2kh storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-531189 describe pod coredns-66bc5c9577-bc2kh storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-531189 describe pod coredns-66bc5c9577-bc2kh storage-provisioner: exit status 1 (66.790745ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-bc2kh" not found
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-531189 describe pod coredns-66bc5c9577-bc2kh storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.00s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.36s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-046175 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-046175 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (270.309889ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-22T00:33:26Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-046175 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-046175 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-046175 describe deploy/metrics-server -n kube-system: exit status 1 (76.941329ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-046175 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
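
Note: the root cause is visible in the stderr above. Before enabling an addon, minikube checks whether the cluster is paused by running "sudo runc list -f json" inside the node, and here that check itself fails: the runc state directory /run/runc does not exist on this crio node, so the command exits 1 and the enable aborts with MK_ADDON_ENABLE_PAUSED before the metrics-server deployment is ever created. The failing check can be reproduced by hand (a sketch, assuming the profile name from this test):

	minikube -p default-k8s-diff-port-046175 ssh -- sudo runc list -f json
	minikube -p default-k8s-diff-port-046175 ssh -- ls -ld /run/runc
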
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-046175
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-046175:

-- stdout --
	[
	    {
	        "Id": "45fe2cf873e1459f1b2e2590379fa9c19405304d5d34fbc56c223b1a0d28973e",
	        "Created": "2025-11-22T00:32:41.655265951Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 262022,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-22T00:32:41.686237409Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e5906b22e872a17998ae88aee6d850484e7a99144e0db6afcf2c44a53e6042d4",
	        "ResolvConfPath": "/var/lib/docker/containers/45fe2cf873e1459f1b2e2590379fa9c19405304d5d34fbc56c223b1a0d28973e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/45fe2cf873e1459f1b2e2590379fa9c19405304d5d34fbc56c223b1a0d28973e/hostname",
	        "HostsPath": "/var/lib/docker/containers/45fe2cf873e1459f1b2e2590379fa9c19405304d5d34fbc56c223b1a0d28973e/hosts",
	        "LogPath": "/var/lib/docker/containers/45fe2cf873e1459f1b2e2590379fa9c19405304d5d34fbc56c223b1a0d28973e/45fe2cf873e1459f1b2e2590379fa9c19405304d5d34fbc56c223b1a0d28973e-json.log",
	        "Name": "/default-k8s-diff-port-046175",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-046175:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-046175",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "45fe2cf873e1459f1b2e2590379fa9c19405304d5d34fbc56c223b1a0d28973e",
	                "LowerDir": "/var/lib/docker/overlay2/565c4e45a52dea4a903d13848e091e336b960e3c2e8c160cc7329c79f51cfcf8-init/diff:/var/lib/docker/overlay2/ba9391341a6f38c98c330a240b44a37e901d779a7c95e15c141c59c46e09c348/diff",
	                "MergedDir": "/var/lib/docker/overlay2/565c4e45a52dea4a903d13848e091e336b960e3c2e8c160cc7329c79f51cfcf8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/565c4e45a52dea4a903d13848e091e336b960e3c2e8c160cc7329c79f51cfcf8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/565c4e45a52dea4a903d13848e091e336b960e3c2e8c160cc7329c79f51cfcf8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-046175",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-046175/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-046175",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-046175",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-046175",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "4ae0c6298cf82f1560aedb6006c5b4a4754d6ee010360eed719460d4f4ea7543",
	            "SandboxKey": "/var/run/docker/netns/4ae0c6298cf8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33078"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33079"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33082"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33080"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33081"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-046175": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "85b8c03d926ba0e46aa73effaa1a551cb600a9455d371f54191cd0d2f0a6ca5c",
	                    "EndpointID": "c685193593b9ebdc43db8567e863a1e817a57b7c0ca6a580e21a49d5a158ba34",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "96:ba:a9:f7:df:a3",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-046175",
	                        "45fe2cf873e1"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
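
Note: the inspect output confirms the non-default API server port for this profile: 8444/tcp is published on 127.0.0.1:33081 (alongside SSH on 33078). If needed, the mapped host port can be extracted directly with an inspect template (a sketch, assuming the container name shown above):

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}' default-k8s-diff-port-046175
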
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-046175 -n default-k8s-diff-port-046175
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-046175 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-046175 logs -n 25: (1.094745494s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p old-k8s-version-377321 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-377321       │ jenkins │ v1.37.0 │ 22 Nov 25 00:31 UTC │ 22 Nov 25 00:32 UTC │
	│ stop    │ -p no-preload-983546 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-983546            │ jenkins │ v1.37.0 │ 22 Nov 25 00:31 UTC │ 22 Nov 25 00:31 UTC │
	│ start   │ -p cert-expiration-624739 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-624739       │ jenkins │ v1.37.0 │ 22 Nov 25 00:31 UTC │ 22 Nov 25 00:31 UTC │
	│ delete  │ -p cert-expiration-624739                                                                                                                                                                                                                     │ cert-expiration-624739       │ jenkins │ v1.37.0 │ 22 Nov 25 00:31 UTC │ 22 Nov 25 00:31 UTC │
	│ start   │ -p embed-certs-084979 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-084979           │ jenkins │ v1.37.0 │ 22 Nov 25 00:31 UTC │ 22 Nov 25 00:32 UTC │
	│ addons  │ enable dashboard -p no-preload-983546 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-983546            │ jenkins │ v1.37.0 │ 22 Nov 25 00:31 UTC │ 22 Nov 25 00:31 UTC │
	│ start   │ -p no-preload-983546 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-983546            │ jenkins │ v1.37.0 │ 22 Nov 25 00:31 UTC │ 22 Nov 25 00:32 UTC │
	│ image   │ old-k8s-version-377321 image list --format=json                                                                                                                                                                                               │ old-k8s-version-377321       │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │ 22 Nov 25 00:32 UTC │
	│ pause   │ -p old-k8s-version-377321 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-377321       │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │                     │
	│ delete  │ -p old-k8s-version-377321                                                                                                                                                                                                                     │ old-k8s-version-377321       │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │ 22 Nov 25 00:32 UTC │
	│ delete  │ -p old-k8s-version-377321                                                                                                                                                                                                                     │ old-k8s-version-377321       │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │ 22 Nov 25 00:32 UTC │
	│ delete  │ -p disable-driver-mounts-751225                                                                                                                                                                                                               │ disable-driver-mounts-751225 │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │ 22 Nov 25 00:32 UTC │
	│ start   │ -p default-k8s-diff-port-046175 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-046175 │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │ 22 Nov 25 00:33 UTC │
	│ image   │ no-preload-983546 image list --format=json                                                                                                                                                                                                    │ no-preload-983546            │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │ 22 Nov 25 00:32 UTC │
	│ pause   │ -p no-preload-983546 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-983546            │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │                     │
	│ delete  │ -p no-preload-983546                                                                                                                                                                                                                          │ no-preload-983546            │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │ 22 Nov 25 00:32 UTC │
	│ delete  │ -p no-preload-983546                                                                                                                                                                                                                          │ no-preload-983546            │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │ 22 Nov 25 00:32 UTC │
	│ start   │ -p newest-cni-531189 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-531189            │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │ 22 Nov 25 00:33 UTC │
	│ addons  │ enable metrics-server -p embed-certs-084979 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-084979           │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │                     │
	│ stop    │ -p embed-certs-084979 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-084979           │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │ 22 Nov 25 00:33 UTC │
	│ addons  │ enable dashboard -p embed-certs-084979 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-084979           │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │ 22 Nov 25 00:33 UTC │
	│ start   │ -p embed-certs-084979 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-084979           │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-531189 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-531189            │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │                     │
	│ stop    │ -p newest-cni-531189 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-531189            │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-046175 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-046175 │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/22 00:33:19
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1122 00:33:19.523863  271909 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:33:19.524178  271909 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:33:19.524190  271909 out.go:374] Setting ErrFile to fd 2...
	I1122 00:33:19.524197  271909 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:33:19.524505  271909 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9122/.minikube/bin
	I1122 00:33:19.525213  271909 out.go:368] Setting JSON to false
	I1122 00:33:19.526746  271909 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4548,"bootTime":1763767051,"procs":312,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1122 00:33:19.526818  271909 start.go:143] virtualization: kvm guest
	I1122 00:33:19.528603  271909 out.go:179] * [embed-certs-084979] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1122 00:33:19.530373  271909 out.go:179]   - MINIKUBE_LOCATION=21934
	I1122 00:33:19.530375  271909 notify.go:221] Checking for updates...
	I1122 00:33:19.531514  271909 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 00:33:19.532591  271909 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-9122/kubeconfig
	I1122 00:33:19.533700  271909 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-9122/.minikube
	I1122 00:33:19.534719  271909 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1122 00:33:19.538226  271909 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 00:33:15.952133  218533 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1122 00:33:15.952504  218533 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1122 00:33:15.952555  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:33:15.952607  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:33:15.983913  218533 cri.go:89] found id: "31805891c113692793d324f62051c857bb1cbdf7c4889dd16e4c528a240a0a9b"
	I1122 00:33:15.983938  218533 cri.go:89] found id: ""
	I1122 00:33:15.983948  218533 logs.go:282] 1 containers: [31805891c113692793d324f62051c857bb1cbdf7c4889dd16e4c528a240a0a9b]
	I1122 00:33:15.984011  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:33:15.988316  218533 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1122 00:33:15.988382  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:33:16.018200  218533 cri.go:89] found id: ""
	I1122 00:33:16.018228  218533 logs.go:282] 0 containers: []
	W1122 00:33:16.018237  218533 logs.go:284] No container was found matching "etcd"
	I1122 00:33:16.018244  218533 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1122 00:33:16.018302  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:33:16.047751  218533 cri.go:89] found id: ""
	I1122 00:33:16.047778  218533 logs.go:282] 0 containers: []
	W1122 00:33:16.047788  218533 logs.go:284] No container was found matching "coredns"
	I1122 00:33:16.047797  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:33:16.047851  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:33:16.077525  218533 cri.go:89] found id: "f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:33:16.077548  218533 cri.go:89] found id: ""
	I1122 00:33:16.077558  218533 logs.go:282] 1 containers: [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33]
	I1122 00:33:16.077622  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:33:16.082511  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:33:16.082574  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:33:16.113401  218533 cri.go:89] found id: ""
	I1122 00:33:16.113425  218533 logs.go:282] 0 containers: []
	W1122 00:33:16.113435  218533 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:33:16.113443  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:33:16.113496  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:33:16.143080  218533 cri.go:89] found id: "dc92b1709a65ec6599e10e847369b5072bcf0cdafd3a4c895099133103c59b76"
	I1122 00:33:16.143103  218533 cri.go:89] found id: ""
	I1122 00:33:16.143113  218533 logs.go:282] 1 containers: [dc92b1709a65ec6599e10e847369b5072bcf0cdafd3a4c895099133103c59b76]
	I1122 00:33:16.143168  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:33:16.147833  218533 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1122 00:33:16.147897  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:33:16.176876  218533 cri.go:89] found id: ""
	I1122 00:33:16.176901  218533 logs.go:282] 0 containers: []
	W1122 00:33:16.176911  218533 logs.go:284] No container was found matching "kindnet"
	I1122 00:33:16.176918  218533 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:33:16.176973  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:33:16.206965  218533 cri.go:89] found id: ""
	I1122 00:33:16.206991  218533 logs.go:282] 0 containers: []
	W1122 00:33:16.206999  218533 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:33:16.207008  218533 logs.go:123] Gathering logs for dmesg ...
	I1122 00:33:16.207019  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1122 00:33:16.223103  218533 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:33:16.223132  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1122 00:33:16.287753  218533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1122 00:33:16.287785  218533 logs.go:123] Gathering logs for kube-apiserver [31805891c113692793d324f62051c857bb1cbdf7c4889dd16e4c528a240a0a9b] ...
	I1122 00:33:16.287801  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 31805891c113692793d324f62051c857bb1cbdf7c4889dd16e4c528a240a0a9b"
	I1122 00:33:16.324180  218533 logs.go:123] Gathering logs for kube-scheduler [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33] ...
	I1122 00:33:16.324217  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:33:16.390653  218533 logs.go:123] Gathering logs for kube-controller-manager [dc92b1709a65ec6599e10e847369b5072bcf0cdafd3a4c895099133103c59b76] ...
	I1122 00:33:16.390687  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dc92b1709a65ec6599e10e847369b5072bcf0cdafd3a4c895099133103c59b76"
	I1122 00:33:16.418400  218533 logs.go:123] Gathering logs for CRI-O ...
	I1122 00:33:16.418427  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1122 00:33:16.476172  218533 logs.go:123] Gathering logs for container status ...
	I1122 00:33:16.476200  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1122 00:33:16.507721  218533 logs.go:123] Gathering logs for kubelet ...
	I1122 00:33:16.507748  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:33:19.098131  218533 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1122 00:33:19.098570  218533 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1122 00:33:19.098626  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:33:19.098676  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:33:19.129130  218533 cri.go:89] found id: "31805891c113692793d324f62051c857bb1cbdf7c4889dd16e4c528a240a0a9b"
	I1122 00:33:19.129154  218533 cri.go:89] found id: ""
	I1122 00:33:19.129165  218533 logs.go:282] 1 containers: [31805891c113692793d324f62051c857bb1cbdf7c4889dd16e4c528a240a0a9b]
	I1122 00:33:19.129217  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:33:19.133133  218533 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1122 00:33:19.133195  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:33:19.162511  218533 cri.go:89] found id: ""
	I1122 00:33:19.162539  218533 logs.go:282] 0 containers: []
	W1122 00:33:19.162550  218533 logs.go:284] No container was found matching "etcd"
	I1122 00:33:19.162556  218533 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1122 00:33:19.162612  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:33:19.200402  218533 cri.go:89] found id: ""
	I1122 00:33:19.200424  218533 logs.go:282] 0 containers: []
	W1122 00:33:19.200431  218533 logs.go:284] No container was found matching "coredns"
	I1122 00:33:19.200437  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:33:19.200491  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:33:19.234947  218533 cri.go:89] found id: "f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:33:19.234967  218533 cri.go:89] found id: ""
	I1122 00:33:19.234977  218533 logs.go:282] 1 containers: [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33]
	I1122 00:33:19.235034  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:33:19.238893  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:33:19.238955  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:33:19.267650  218533 cri.go:89] found id: ""
	I1122 00:33:19.267674  218533 logs.go:282] 0 containers: []
	W1122 00:33:19.267684  218533 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:33:19.267692  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:33:19.267747  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:33:19.297420  218533 cri.go:89] found id: "dc92b1709a65ec6599e10e847369b5072bcf0cdafd3a4c895099133103c59b76"
	I1122 00:33:19.297441  218533 cri.go:89] found id: ""
	I1122 00:33:19.297452  218533 logs.go:282] 1 containers: [dc92b1709a65ec6599e10e847369b5072bcf0cdafd3a4c895099133103c59b76]
	I1122 00:33:19.297512  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:33:19.302425  218533 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1122 00:33:19.302489  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:33:19.331630  218533 cri.go:89] found id: ""
	I1122 00:33:19.331653  218533 logs.go:282] 0 containers: []
	W1122 00:33:19.331664  218533 logs.go:284] No container was found matching "kindnet"
	I1122 00:33:19.331671  218533 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:33:19.331724  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:33:19.361351  218533 cri.go:89] found id: ""
	I1122 00:33:19.361372  218533 logs.go:282] 0 containers: []
	W1122 00:33:19.361379  218533 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:33:19.361386  218533 logs.go:123] Gathering logs for CRI-O ...
	I1122 00:33:19.361398  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1122 00:33:19.425387  218533 logs.go:123] Gathering logs for container status ...
	I1122 00:33:19.425415  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1122 00:33:19.456976  218533 logs.go:123] Gathering logs for kubelet ...
	I1122 00:33:19.456999  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:33:19.539724  271909 config.go:182] Loaded profile config "embed-certs-084979": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:33:19.540391  271909 driver.go:422] Setting default libvirt URI to qemu:///system
	I1122 00:33:19.567024  271909 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1122 00:33:19.567207  271909 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:33:19.627997  271909 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-22 00:33:19.618189395 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1122 00:33:19.628123  271909 docker.go:319] overlay module found
	I1122 00:33:19.629763  271909 out.go:179] * Using the docker driver based on existing profile
	I1122 00:33:19.630932  271909 start.go:309] selected driver: docker
	I1122 00:33:19.630946  271909 start.go:930] validating driver "docker" against &{Name:embed-certs-084979 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-084979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:33:19.631045  271909 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 00:33:19.631638  271909 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:33:19.693883  271909 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-22 00:33:19.682475533 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1122 00:33:19.694278  271909 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:33:19.694327  271909 cni.go:84] Creating CNI manager for ""
	I1122 00:33:19.694397  271909 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1122 00:33:19.694451  271909 start.go:353] cluster config:
	{Name:embed-certs-084979 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-084979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:33:19.696012  271909 out.go:179] * Starting "embed-certs-084979" primary control-plane node in "embed-certs-084979" cluster
	I1122 00:33:19.697282  271909 cache.go:134] Beginning downloading kic base image for docker with crio
	I1122 00:33:19.698480  271909 out.go:179] * Pulling base image v0.0.48-1763588073-21934 ...
	I1122 00:33:19.699511  271909 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:33:19.699539  271909 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21934-9122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1122 00:33:19.699550  271909 cache.go:65] Caching tarball of preloaded images
	I1122 00:33:19.699594  271909 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon
	I1122 00:33:19.699647  271909 preload.go:238] Found /home/jenkins/minikube-integration/21934-9122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1122 00:33:19.699660  271909 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1122 00:33:19.699762  271909 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/embed-certs-084979/config.json ...
	I1122 00:33:19.721368  271909 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon, skipping pull
	I1122 00:33:19.721386  271909 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e exists in daemon, skipping load
	I1122 00:33:19.721401  271909 cache.go:243] Successfully downloaded all kic artifacts
	I1122 00:33:19.721431  271909 start.go:360] acquireMachinesLock for embed-certs-084979: {Name:mkdbb4c4ccc5b23cd8525c30101b33a32058591d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:33:19.721495  271909 start.go:364] duration metric: took 42.563µs to acquireMachinesLock for "embed-certs-084979"
	I1122 00:33:19.721516  271909 start.go:96] Skipping create...Using existing machine configuration
	I1122 00:33:19.721526  271909 fix.go:54] fixHost starting: 
	I1122 00:33:19.721770  271909 cli_runner.go:164] Run: docker container inspect embed-certs-084979 --format={{.State.Status}}
	I1122 00:33:19.738738  271909 fix.go:112] recreateIfNeeded on embed-certs-084979: state=Stopped err=<nil>
	W1122 00:33:19.738779  271909 fix.go:138] unexpected machine state, will restart: <nil>
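
Process 271909 has found that the embed-certs-084979 container exists but is stopped, so the machine will be restarted rather than recreated. The probe-and-restart decision reduces to two docker commands (a sketch of the pattern only; minikube drives this through cli_runner in Go, and the actual `docker start` appears further down in this log):

# Read the container's lifecycle state from the docker engine.
state=$(docker container inspect embed-certs-084979 --format '{{.State.Status}}')
if [ "$state" != "running" ]; then
  docker start embed-certs-084979   # reuse the existing machine, no re-create
fi
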
	I1122 00:33:18.206028  266241 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1122 00:33:18.211178  266241 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1122 00:33:18.211198  266241 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1122 00:33:18.224333  266241 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1122 00:33:18.424680  266241 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1122 00:33:18.424767  266241 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:33:18.424791  266241 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-531189 minikube.k8s.io/updated_at=2025_11_22T00_33_18_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785 minikube.k8s.io/name=newest-cni-531189 minikube.k8s.io/primary=true
	I1122 00:33:18.433879  266241 ops.go:34] apiserver oom_adj: -16
	I1122 00:33:18.504904  266241 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:33:19.005274  266241 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:33:19.505002  266241 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:33:20.004976  266241 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:33:20.505197  266241 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:33:21.005910  266241 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:33:21.505843  266241 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:33:22.005517  266241 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:33:22.505948  266241 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:33:23.005149  266241 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:33:23.067592  266241 kubeadm.go:1114] duration metric: took 4.642882926s to wait for elevateKubeSystemPrivileges
	I1122 00:33:23.067628  266241 kubeadm.go:403] duration metric: took 15.612157621s to StartCluster
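
The burst of half-second-spaced `kubectl get sa default` runs above is elevateKubeSystemPrivileges waiting for kubeadm to create the default ServiceAccount, which the cluster-admin binding issued earlier targets. As a shell sketch (binary path and kubeconfig are taken from the log; the loop shape is illustrative):

K=/var/lib/minikube/binaries/v1.34.1/kubectl
# Poll every 500 ms until kubeadm has created the "default" ServiceAccount.
until sudo "$K" get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
  sleep 0.5
done
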
	I1122 00:33:23.067651  266241 settings.go:142] acquiring lock: {Name:mk281bec5fc7c41e6f3fe8d6a1502b13a2db8fe5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:33:23.067719  266241 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21934-9122/kubeconfig
	I1122 00:33:23.069296  266241 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/kubeconfig: {Name:mk85f0b9d89ff824568ecbebf0d1111042a6bb93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:33:23.069537  266241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1122 00:33:23.069552  266241 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1122 00:33:23.069618  266241 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1122 00:33:23.069752  266241 config.go:182] Loaded profile config "newest-cni-531189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:33:23.069770  266241 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-531189"
	I1122 00:33:23.069798  266241 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-531189"
	I1122 00:33:23.069792  266241 addons.go:70] Setting default-storageclass=true in profile "newest-cni-531189"
	I1122 00:33:23.069835  266241 host.go:66] Checking if "newest-cni-531189" exists ...
	I1122 00:33:23.069831  266241 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-531189"
	I1122 00:33:23.070250  266241 cli_runner.go:164] Run: docker container inspect newest-cni-531189 --format={{.State.Status}}
	I1122 00:33:23.070354  266241 cli_runner.go:164] Run: docker container inspect newest-cni-531189 --format={{.State.Status}}
	I1122 00:33:23.070903  266241 out.go:179] * Verifying Kubernetes components...
	I1122 00:33:23.072129  266241 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:33:23.094413  266241 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1122 00:33:23.094543  266241 addons.go:239] Setting addon default-storageclass=true in "newest-cni-531189"
	I1122 00:33:23.094584  266241 host.go:66] Checking if "newest-cni-531189" exists ...
	I1122 00:33:23.095102  266241 cli_runner.go:164] Run: docker container inspect newest-cni-531189 --format={{.State.Status}}
	I1122 00:33:23.095600  266241 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:33:23.095618  266241 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1122 00:33:23.095666  266241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531189
	I1122 00:33:23.121395  266241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/newest-cni-531189/id_rsa Username:docker}
	I1122 00:33:23.128262  266241 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1122 00:33:23.128287  266241 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1122 00:33:23.128349  266241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531189
	I1122 00:33:23.154810  266241 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/newest-cni-531189/id_rsa Username:docker}
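
Both SSH clients above dial 127.0.0.1:33083 because the node is a docker container that publishes its sshd on an ephemeral host port; the `docker container inspect -f` template in the Run lines is how that port is recovered. Standalone it looks like this (illustrative invocation; on this host it printed 33083):

docker container inspect newest-cni-531189 \
  --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'
# The result is dialed as 127.0.0.1:<port> with the profile's id_rsa key.
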
	I1122 00:33:23.168180  266241 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1122 00:33:23.224691  266241 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:33:23.235352  266241 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:33:23.273657  266241 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1122 00:33:23.364555  266241 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
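
The sed pipeline above rewrites the coredns ConfigMap in flight: it inserts a hosts block ahead of the `forward . /etc/resolv.conf` line and a `log` directive after `errors`, so cluster workloads can resolve host.minikube.internal to the network gateway (192.168.76.1 here). A quick way to confirm the injection (an illustrative check, assuming the stock Corefile layout):

sudo /var/lib/minikube/binaries/v1.34.1/kubectl \
  --kubeconfig=/var/lib/minikube/kubeconfig \
  -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' \
  | grep -A3 'hosts {'
# Expected fragment after injection:
#   hosts {
#      192.168.76.1 host.minikube.internal
#      fallthrough
#   }
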
	I1122 00:33:23.365010  266241 api_server.go:52] waiting for apiserver process to appear ...
	I1122 00:33:23.365109  266241 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:33:23.569254  266241 api_server.go:72] duration metric: took 499.667104ms to wait for apiserver process to appear ...
	I1122 00:33:23.569291  266241 api_server.go:88] waiting for apiserver healthz status ...
	I1122 00:33:23.569312  266241 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1122 00:33:23.575380  266241 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1122 00:33:23.576259  266241 api_server.go:141] control plane version: v1.34.1
	I1122 00:33:23.576289  266241 api_server.go:131] duration metric: took 6.990367ms to wait for apiserver health ...
	I1122 00:33:23.576301  266241 system_pods.go:43] waiting for kube-system pods to appear ...
	I1122 00:33:23.577787  266241 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1122 00:33:23.579254  266241 system_pods.go:59] 9 kube-system pods found
	I1122 00:33:23.579295  266241 addons.go:530] duration metric: took 509.676957ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1122 00:33:23.579362  266241 system_pods.go:61] "coredns-66bc5c9577-72kgm" [3bcbb420-b262-4fbf-a58a-71256d3fd603] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1122 00:33:23.579402  266241 system_pods.go:61] "coredns-66bc5c9577-bc2kh" [0b3f98b7-386c-4f52-825d-4c30ae7caa9f] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1122 00:33:23.579414  266241 system_pods.go:61] "etcd-newest-cni-531189" [04d1cce2-a1f6-4f51-9bcc-8f7080701d1f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1122 00:33:23.579424  266241 system_pods.go:61] "kindnet-2r5vl" [e3ab47c0-fc8c-4b02-8905-b3975fc5fe58] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1122 00:33:23.579433  266241 system_pods.go:61] "kube-apiserver-newest-cni-531189" [f5b06aeb-cb12-4c70-8eb2-4334f77ce4d6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1122 00:33:23.579441  266241 system_pods.go:61] "kube-controller-manager-newest-cni-531189" [bdef5fc8-ce47-48eb-9109-2a7505f50fad] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1122 00:33:23.579450  266241 system_pods.go:61] "kube-proxy-x8pr8" [5b238c04-98fa-46db-91e7-73a2ff0cb690] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1122 00:33:23.579458  266241 system_pods.go:61] "kube-scheduler-newest-cni-531189" [61447088-de05-4c9a-88f1-50f0e78aace7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1122 00:33:23.579464  266241 system_pods.go:61] "storage-provisioner" [db3f32ea-4aa1-4ccf-aebb-39d818606a7e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1122 00:33:23.579472  266241 system_pods.go:74] duration metric: took 3.16404ms to wait for pod list to return data ...
	I1122 00:33:23.579481  266241 default_sa.go:34] waiting for default service account to be created ...
	I1122 00:33:23.582158  266241 default_sa.go:45] found service account: "default"
	I1122 00:33:23.582211  266241 default_sa.go:55] duration metric: took 2.72223ms for default service account to be created ...
	I1122 00:33:23.582241  266241 kubeadm.go:587] duration metric: took 512.661168ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1122 00:33:23.582285  266241 node_conditions.go:102] verifying NodePressure condition ...
	I1122 00:33:23.584569  266241 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1122 00:33:23.584601  266241 node_conditions.go:123] node cpu capacity is 8
	I1122 00:33:23.584620  266241 node_conditions.go:105] duration metric: took 2.318301ms to run NodePressure ...
	I1122 00:33:23.584634  266241 start.go:242] waiting for startup goroutines ...
	I1122 00:33:23.869233  266241 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-531189" context rescaled to 1 replicas
	I1122 00:33:23.869277  266241 start.go:247] waiting for cluster config update ...
	I1122 00:33:23.869292  266241 start.go:256] writing updated cluster config ...
	I1122 00:33:23.869597  266241 ssh_runner.go:195] Run: rm -f paused
	I1122 00:33:23.918865  266241 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1122 00:33:23.921412  266241 out.go:179] * Done! kubectl is now configured to use "newest-cni-531189" cluster and "default" namespace by default
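
The readiness sequence that closed out this start (wait for the kube-apiserver process, then poll /healthz until it answers 200 "ok") compresses to two waits. A sketch, with the endpoint and pgrep pattern taken from the log and the retry loops themselves illustrative:

# Wait for the apiserver process to exist on the node...
until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do sleep 1; done
# ...then for its healthz endpoint to answer (self-signed cert, hence -k).
until curl -fsk https://192.168.76.2:8443/healthz | grep -qx ok; do sleep 1; done
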
	I1122 00:33:19.740262  271909 out.go:252] * Restarting existing docker container for "embed-certs-084979" ...
	I1122 00:33:19.740347  271909 cli_runner.go:164] Run: docker start embed-certs-084979
	I1122 00:33:20.036119  271909 cli_runner.go:164] Run: docker container inspect embed-certs-084979 --format={{.State.Status}}
	I1122 00:33:20.055996  271909 kic.go:430] container "embed-certs-084979" state is running.
	I1122 00:33:20.056369  271909 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-084979
	I1122 00:33:20.076119  271909 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/embed-certs-084979/config.json ...
	I1122 00:33:20.076314  271909 machine.go:94] provisionDockerMachine start ...
	I1122 00:33:20.076380  271909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-084979
	I1122 00:33:20.094365  271909 main.go:143] libmachine: Using SSH client type: native
	I1122 00:33:20.094671  271909 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1122 00:33:20.094695  271909 main.go:143] libmachine: About to run SSH command:
	hostname
	I1122 00:33:20.095470  271909 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47732->127.0.0.1:33088: read: connection reset by peer
	I1122 00:33:23.239165  271909 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-084979
	
	I1122 00:33:23.239193  271909 ubuntu.go:182] provisioning hostname "embed-certs-084979"
	I1122 00:33:23.239270  271909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-084979
	I1122 00:33:23.267203  271909 main.go:143] libmachine: Using SSH client type: native
	I1122 00:33:23.267504  271909 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1122 00:33:23.267521  271909 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-084979 && echo "embed-certs-084979" | sudo tee /etc/hostname
	I1122 00:33:23.415022  271909 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-084979
	
	I1122 00:33:23.415186  271909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-084979
	I1122 00:33:23.442968  271909 main.go:143] libmachine: Using SSH client type: native
	I1122 00:33:23.443355  271909 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1122 00:33:23.443381  271909 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-084979' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-084979/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-084979' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1122 00:33:23.578909  271909 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1122 00:33:23.578936  271909 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21934-9122/.minikube CaCertPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21934-9122/.minikube}
	I1122 00:33:23.578952  271909 ubuntu.go:190] setting up certificates
	I1122 00:33:23.578962  271909 provision.go:84] configureAuth start
	I1122 00:33:23.579012  271909 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-084979
	I1122 00:33:23.600564  271909 provision.go:143] copyHostCerts
	I1122 00:33:23.600616  271909 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9122/.minikube/ca.pem, removing ...
	I1122 00:33:23.600629  271909 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9122/.minikube/ca.pem
	I1122 00:33:23.600689  271909 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21934-9122/.minikube/ca.pem (1078 bytes)
	I1122 00:33:23.600788  271909 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9122/.minikube/cert.pem, removing ...
	I1122 00:33:23.600797  271909 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9122/.minikube/cert.pem
	I1122 00:33:23.600823  271909 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21934-9122/.minikube/cert.pem (1123 bytes)
	I1122 00:33:23.600891  271909 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9122/.minikube/key.pem, removing ...
	I1122 00:33:23.600898  271909 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9122/.minikube/key.pem
	I1122 00:33:23.600930  271909 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21934-9122/.minikube/key.pem (1675 bytes)
	I1122 00:33:23.600994  271909 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21934-9122/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca-key.pem org=jenkins.embed-certs-084979 san=[127.0.0.1 192.168.94.2 embed-certs-084979 localhost minikube]
	I1122 00:33:23.641972  271909 provision.go:177] copyRemoteCerts
	I1122 00:33:23.642021  271909 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1122 00:33:23.642067  271909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-084979
	I1122 00:33:23.659300  271909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/embed-certs-084979/id_rsa Username:docker}
	I1122 00:33:23.750824  271909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1122 00:33:23.767644  271909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1122 00:33:23.783985  271909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1122 00:33:23.800259  271909 provision.go:87] duration metric: took 221.28208ms to configureAuth
	I1122 00:33:23.800281  271909 ubuntu.go:206] setting minikube options for container-runtime
	I1122 00:33:23.800456  271909 config.go:182] Loaded profile config "embed-certs-084979": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:33:23.800557  271909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-084979
	I1122 00:33:23.818857  271909 main.go:143] libmachine: Using SSH client type: native
	I1122 00:33:23.819105  271909 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1122 00:33:23.819122  271909 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1122 00:33:24.141432  271909 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1122 00:33:24.141463  271909 machine.go:97] duration metric: took 4.065134365s to provisionDockerMachine
	I1122 00:33:24.141478  271909 start.go:293] postStartSetup for "embed-certs-084979" (driver="docker")
	I1122 00:33:24.141490  271909 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1122 00:33:24.141548  271909 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1122 00:33:24.141620  271909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-084979
	I1122 00:33:24.162474  271909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/embed-certs-084979/id_rsa Username:docker}
	I1122 00:33:24.255746  271909 ssh_runner.go:195] Run: cat /etc/os-release
	I1122 00:33:24.259451  271909 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1122 00:33:24.259484  271909 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1122 00:33:24.259495  271909 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-9122/.minikube/addons for local assets ...
	I1122 00:33:24.259540  271909 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-9122/.minikube/files for local assets ...
	I1122 00:33:24.259640  271909 filesync.go:149] local asset: /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem -> 145852.pem in /etc/ssl/certs
	I1122 00:33:24.259760  271909 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1122 00:33:24.268087  271909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem --> /etc/ssl/certs/145852.pem (1708 bytes)
	I1122 00:33:24.285792  271909 start.go:296] duration metric: took 144.303637ms for postStartSetup
	I1122 00:33:24.285870  271909 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:33:24.285907  271909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-084979
	I1122 00:33:24.305129  271909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/embed-certs-084979/id_rsa Username:docker}
	I1122 00:33:24.394758  271909 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 00:33:24.399105  271909 fix.go:56] duration metric: took 4.677573464s for fixHost
	I1122 00:33:24.399135  271909 start.go:83] releasing machines lock for "embed-certs-084979", held for 4.677619645s
	I1122 00:33:24.399190  271909 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-084979
	I1122 00:33:24.417429  271909 ssh_runner.go:195] Run: cat /version.json
	I1122 00:33:24.417500  271909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-084979
	I1122 00:33:24.417541  271909 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1122 00:33:24.417599  271909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-084979
	I1122 00:33:24.437333  271909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/embed-certs-084979/id_rsa Username:docker}
	I1122 00:33:24.437447  271909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/embed-certs-084979/id_rsa Username:docker}
	I1122 00:33:19.569689  218533 logs.go:123] Gathering logs for dmesg ...
	I1122 00:33:19.569722  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1122 00:33:19.587535  218533 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:33:19.587565  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1122 00:33:19.653838  218533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1122 00:33:19.653862  218533 logs.go:123] Gathering logs for kube-apiserver [31805891c113692793d324f62051c857bb1cbdf7c4889dd16e4c528a240a0a9b] ...
	I1122 00:33:19.653879  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 31805891c113692793d324f62051c857bb1cbdf7c4889dd16e4c528a240a0a9b"
	I1122 00:33:19.693182  218533 logs.go:123] Gathering logs for kube-scheduler [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33] ...
	I1122 00:33:19.693218  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:33:19.755020  218533 logs.go:123] Gathering logs for kube-controller-manager [dc92b1709a65ec6599e10e847369b5072bcf0cdafd3a4c895099133103c59b76] ...
	I1122 00:33:19.755049  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dc92b1709a65ec6599e10e847369b5072bcf0cdafd3a4c895099133103c59b76"
	I1122 00:33:22.286560  218533 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1122 00:33:22.286991  218533 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1122 00:33:22.287042  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:33:22.287123  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:33:22.316409  218533 cri.go:89] found id: "31805891c113692793d324f62051c857bb1cbdf7c4889dd16e4c528a240a0a9b"
	I1122 00:33:22.316427  218533 cri.go:89] found id: ""
	I1122 00:33:22.316435  218533 logs.go:282] 1 containers: [31805891c113692793d324f62051c857bb1cbdf7c4889dd16e4c528a240a0a9b]
	I1122 00:33:22.316480  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:33:22.320282  218533 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1122 00:33:22.320343  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:33:22.345912  218533 cri.go:89] found id: ""
	I1122 00:33:22.345941  218533 logs.go:282] 0 containers: []
	W1122 00:33:22.345950  218533 logs.go:284] No container was found matching "etcd"
	I1122 00:33:22.345956  218533 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1122 00:33:22.346006  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:33:22.373204  218533 cri.go:89] found id: ""
	I1122 00:33:22.373229  218533 logs.go:282] 0 containers: []
	W1122 00:33:22.373240  218533 logs.go:284] No container was found matching "coredns"
	I1122 00:33:22.373251  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:33:22.373304  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:33:22.399781  218533 cri.go:89] found id: "f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:33:22.399803  218533 cri.go:89] found id: ""
	I1122 00:33:22.399814  218533 logs.go:282] 1 containers: [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33]
	I1122 00:33:22.399860  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:33:22.403445  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:33:22.403493  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:33:22.428150  218533 cri.go:89] found id: ""
	I1122 00:33:22.428172  218533 logs.go:282] 0 containers: []
	W1122 00:33:22.428182  218533 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:33:22.428187  218533 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:33:22.428245  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:33:22.453012  218533 cri.go:89] found id: "dc92b1709a65ec6599e10e847369b5072bcf0cdafd3a4c895099133103c59b76"
	I1122 00:33:22.453038  218533 cri.go:89] found id: ""
	I1122 00:33:22.453048  218533 logs.go:282] 1 containers: [dc92b1709a65ec6599e10e847369b5072bcf0cdafd3a4c895099133103c59b76]
	I1122 00:33:22.453118  218533 ssh_runner.go:195] Run: which crictl
	I1122 00:33:22.456609  218533 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1122 00:33:22.456666  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:33:22.481421  218533 cri.go:89] found id: ""
	I1122 00:33:22.481444  218533 logs.go:282] 0 containers: []
	W1122 00:33:22.481452  218533 logs.go:284] No container was found matching "kindnet"
	I1122 00:33:22.481458  218533 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:33:22.481507  218533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:33:22.507715  218533 cri.go:89] found id: ""
	I1122 00:33:22.507739  218533 logs.go:282] 0 containers: []
	W1122 00:33:22.507748  218533 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:33:22.507759  218533 logs.go:123] Gathering logs for container status ...
	I1122 00:33:22.507781  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1122 00:33:22.544814  218533 logs.go:123] Gathering logs for kubelet ...
	I1122 00:33:22.544846  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:33:22.638926  218533 logs.go:123] Gathering logs for dmesg ...
	I1122 00:33:22.638954  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1122 00:33:22.653185  218533 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:33:22.653210  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1122 00:33:22.709636  218533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1122 00:33:22.709661  218533 logs.go:123] Gathering logs for kube-apiserver [31805891c113692793d324f62051c857bb1cbdf7c4889dd16e4c528a240a0a9b] ...
	I1122 00:33:22.709683  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 31805891c113692793d324f62051c857bb1cbdf7c4889dd16e4c528a240a0a9b"
	I1122 00:33:22.739831  218533 logs.go:123] Gathering logs for kube-scheduler [f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33] ...
	I1122 00:33:22.739858  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f3929e68eba52318bd3ccaa0e7ece1c8e9993d8816b9c3891d843966fee71f33"
	I1122 00:33:22.796990  218533 logs.go:123] Gathering logs for kube-controller-manager [dc92b1709a65ec6599e10e847369b5072bcf0cdafd3a4c895099133103c59b76] ...
	I1122 00:33:22.797022  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dc92b1709a65ec6599e10e847369b5072bcf0cdafd3a4c895099133103c59b76"
	I1122 00:33:22.822631  218533 logs.go:123] Gathering logs for CRI-O ...
	I1122 00:33:22.822656  218533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1122 00:33:24.595412  271909 ssh_runner.go:195] Run: systemctl --version
	I1122 00:33:24.601649  271909 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1122 00:33:24.636278  271909 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1122 00:33:24.641437  271909 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1122 00:33:24.641509  271909 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1122 00:33:24.650387  271909 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1122 00:33:24.650411  271909 start.go:496] detecting cgroup driver to use...
	I1122 00:33:24.650442  271909 detect.go:190] detected "systemd" cgroup driver on host os
	I1122 00:33:24.650486  271909 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1122 00:33:24.666914  271909 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1122 00:33:24.680313  271909 docker.go:218] disabling cri-docker service (if available) ...
	I1122 00:33:24.680365  271909 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1122 00:33:24.695245  271909 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1122 00:33:24.707722  271909 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1122 00:33:24.789382  271909 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1122 00:33:24.875529  271909 docker.go:234] disabling docker service ...
	I1122 00:33:24.875589  271909 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1122 00:33:24.891357  271909 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1122 00:33:24.903375  271909 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1122 00:33:24.997772  271909 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1122 00:33:25.080600  271909 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1122 00:33:25.093796  271909 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1122 00:33:25.107558  271909 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1122 00:33:25.107619  271909 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:33:25.116664  271909 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1122 00:33:25.116717  271909 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:33:25.125868  271909 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:33:25.134290  271909 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:33:25.142707  271909 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1122 00:33:25.151418  271909 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:33:25.160563  271909 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:33:25.169635  271909 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:33:25.179910  271909 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1122 00:33:25.187666  271909 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1122 00:33:25.194919  271909 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:33:25.284428  271909 ssh_runner.go:195] Run: sudo systemctl restart crio
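
The sed sequence above edits /etc/crio/crio.conf.d/02-crio.conf in place: pause image, systemd cgroup management, pod-scoped conmon, and unprivileged low ports. Written out as a single drop-in, the settings it converges on look roughly like this (reconstructed from the log's edits; not a file minikube ships verbatim, and the example file name is mine):

sudo tee /etc/crio/crio.conf.d/02-crio-example.conf >/dev/null <<'EOF'
[crio.image]
pause_image = "registry.k8s.io/pause:3.10.1"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
EOF
sudo systemctl restart crio   # as above, to pick up the new configuration
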
	I1122 00:33:25.426466  271909 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1122 00:33:25.426530  271909 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1122 00:33:25.430949  271909 start.go:564] Will wait 60s for crictl version
	I1122 00:33:25.431007  271909 ssh_runner.go:195] Run: which crictl
	I1122 00:33:25.435209  271909 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1122 00:33:25.468380  271909 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1122 00:33:25.468463  271909 ssh_runner.go:195] Run: crio --version
	I1122 00:33:25.504554  271909 ssh_runner.go:195] Run: crio --version
	I1122 00:33:25.540868  271909 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1122 00:33:25.542040  271909 cli_runner.go:164] Run: docker network inspect embed-certs-084979 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:33:25.563396  271909 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1122 00:33:25.567970  271909 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:33:25.579466  271909 kubeadm.go:884] updating cluster {Name:embed-certs-084979 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-084979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1122 00:33:25.579673  271909 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:33:25.579756  271909 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:33:25.616526  271909 crio.go:514] all images are preloaded for cri-o runtime.
	I1122 00:33:25.616545  271909 crio.go:433] Images already preloaded, skipping extraction
	I1122 00:33:25.616585  271909 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:33:25.642522  271909 crio.go:514] all images are preloaded for cri-o runtime.
	I1122 00:33:25.642550  271909 cache_images.go:86] Images are preloaded, skipping loading
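The preload check above lists images through crictl's JSON output and concludes nothing needs extraction. A sketch of that check (the JSON field names follow crictl's observed output shape and should be treated as assumptions):

package sketch

import (
	"encoding/json"
	"os/exec"
)

// criImageList mirrors the relevant slice of `crictl images --output json`.
type criImageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// imagesPreloaded reports whether every wanted tag is already present
// in the runtime's image store, as the cache_images step above decides.
func imagesPreloaded(want []string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var list criImageList
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	for _, w := range want {
		if !have[w] {
			return false, nil
		}
	}
	return true, nil
}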
	I1122 00:33:25.642560  271909 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1122 00:33:25.642692  271909 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-084979 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-084979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1122 00:33:25.642787  271909 ssh_runner.go:195] Run: crio config
	I1122 00:33:25.689094  271909 cni.go:84] Creating CNI manager for ""
	I1122 00:33:25.689113  271909 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1122 00:33:25.689127  271909 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1122 00:33:25.689159  271909 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-084979 NodeName:embed-certs-084979 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1122 00:33:25.689306  271909 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-084979"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
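The generated config above is a single file holding four YAML documents separated by "---": InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A small Go sketch that splits such a bundle and reports each document's kind, as a quick sanity check (the helper is illustrative, not part of minikube):

package sketch

import "strings"

// docKinds splits a multi-document YAML string on "---" separators and
// returns the "kind:" value of each document, in order.
func docKinds(config string) []string {
	var kinds []string
	for _, doc := range strings.Split(config, "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			trimmed := strings.TrimSpace(line)
			if strings.HasPrefix(trimmed, "kind:") {
				kinds = append(kinds, strings.TrimSpace(strings.TrimPrefix(trimmed, "kind:")))
				break
			}
		}
	}
	return kinds
}

Run against the config above, this would yield [InitConfiguration ClusterConfiguration KubeletConfiguration KubeProxyConfiguration].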
	
	I1122 00:33:25.689371  271909 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1122 00:33:25.697936  271909 binaries.go:51] Found k8s binaries, skipping transfer
	I1122 00:33:25.698013  271909 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1122 00:33:25.705997  271909 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1122 00:33:25.721032  271909 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1122 00:33:25.736774  271909 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1122 00:33:25.752025  271909 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1122 00:33:25.756326  271909 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:33:25.767362  271909 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:33:25.861885  271909 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:33:25.884466  271909 certs.go:69] Setting up /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/embed-certs-084979 for IP: 192.168.94.2
	I1122 00:33:25.884494  271909 certs.go:195] generating shared ca certs ...
	I1122 00:33:25.884513  271909 certs.go:227] acquiring lock for ca certs: {Name:mk04e97b22fe69e444baefaaf0d53a0afe979470 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:33:25.884670  271909 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21934-9122/.minikube/ca.key
	I1122 00:33:25.884724  271909 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21934-9122/.minikube/proxy-client-ca.key
	I1122 00:33:25.884745  271909 certs.go:257] generating profile certs ...
	I1122 00:33:25.884874  271909 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/embed-certs-084979/client.key
	I1122 00:33:25.884964  271909 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/embed-certs-084979/apiserver.key.07b0558b
	I1122 00:33:25.885016  271909 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/embed-certs-084979/proxy-client.key
	I1122 00:33:25.885182  271909 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/14585.pem (1338 bytes)
	W1122 00:33:25.885228  271909 certs.go:480] ignoring /home/jenkins/minikube-integration/21934-9122/.minikube/certs/14585_empty.pem, impossibly tiny 0 bytes
	I1122 00:33:25.885244  271909 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca-key.pem (1679 bytes)
	I1122 00:33:25.885295  271909 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem (1078 bytes)
	I1122 00:33:25.885325  271909 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem (1123 bytes)
	I1122 00:33:25.885358  271909 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/key.pem (1675 bytes)
	I1122 00:33:25.885411  271909 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem (1708 bytes)
	I1122 00:33:25.886191  271909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1122 00:33:25.906135  271909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1122 00:33:25.923788  271909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1122 00:33:25.944469  271909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1122 00:33:25.970464  271909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/embed-certs-084979/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1122 00:33:25.993473  271909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/embed-certs-084979/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1122 00:33:26.012448  271909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/embed-certs-084979/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1122 00:33:26.029836  271909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/embed-certs-084979/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1122 00:33:26.049555  271909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1122 00:33:26.068589  271909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/certs/14585.pem --> /usr/share/ca-certificates/14585.pem (1338 bytes)
	I1122 00:33:26.086849  271909 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem --> /usr/share/ca-certificates/145852.pem (1708 bytes)
	I1122 00:33:26.104689  271909 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1122 00:33:26.116630  271909 ssh_runner.go:195] Run: openssl version
	I1122 00:33:26.122370  271909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1122 00:33:26.130200  271909 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:33:26.133613  271909 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 23:46 /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:33:26.133658  271909 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:33:26.167358  271909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1122 00:33:26.176596  271909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14585.pem && ln -fs /usr/share/ca-certificates/14585.pem /etc/ssl/certs/14585.pem"
	I1122 00:33:26.184548  271909 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14585.pem
	I1122 00:33:26.187940  271909 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 23:52 /usr/share/ca-certificates/14585.pem
	I1122 00:33:26.187989  271909 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14585.pem
	I1122 00:33:26.238160  271909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14585.pem /etc/ssl/certs/51391683.0"
	I1122 00:33:26.247211  271909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145852.pem && ln -fs /usr/share/ca-certificates/145852.pem /etc/ssl/certs/145852.pem"
	I1122 00:33:26.256445  271909 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145852.pem
	I1122 00:33:26.260666  271909 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 23:52 /usr/share/ca-certificates/145852.pem
	I1122 00:33:26.260721  271909 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145852.pem
	I1122 00:33:26.303941  271909 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145852.pem /etc/ssl/certs/3ec20f2e.0"
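The ln -fs calls above follow OpenSSL's trust-store convention: each CA certificate under /usr/share/ca-certificates is symlinked into /etc/ssl/certs under its subject hash plus a ".0" suffix (b5213941.0 for minikubeCA here), which is how OpenSSL locates a trust anchor at verification time. A sketch of that step, assuming the openssl binary is on PATH:

package sketch

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash computes the OpenSSL subject hash of a PEM certificate and
// points /etc/ssl/certs/<hash>.0 at it, mirroring the ln -fs calls above.
func linkByHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // -f semantics: replace any stale link
	return os.Symlink(certPath, link)
}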
	I1122 00:33:26.312950  271909 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1122 00:33:26.317210  271909 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1122 00:33:26.360266  271909 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1122 00:33:26.402000  271909 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1122 00:33:26.448502  271909 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1122 00:33:26.490278  271909 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1122 00:33:26.548927  271909 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
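Each `openssl x509 -checkend 86400` run above exits non-zero if the certificate expires within the next 24 hours, which is how the restart path decides whether existing certs can be reused. The same probe in pure Go, as a minimal sketch:

package sketch

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate's NotAfter falls inside
// the given window from now, the equivalent of -checkend <seconds>.
func expiresWithin(certPath string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(certPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", certPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}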
	I1122 00:33:26.598437  271909 kubeadm.go:401] StartCluster: {Name:embed-certs-084979 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-084979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:33:26.598555  271909 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1122 00:33:26.598609  271909 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1122 00:33:26.633563  271909 cri.go:89] found id: "7a9dde98c18cd008af1877b7920c71620a86d6002ad73e035d4cfdfd76b47f11"
	I1122 00:33:26.633584  271909 cri.go:89] found id: "e8c7c674c4b5496f49f6a4264627256c21e25a81e0bd0024407bf75f2b148d3e"
	I1122 00:33:26.633661  271909 cri.go:89] found id: "b3fad9a866aee07f831f2b8d9504071e3b206772e1161a3e3fa2e5137fe54ecd"
	I1122 00:33:26.633667  271909 cri.go:89] found id: "551c0189a873461b8c5320fb2ea521e29317b304075057684cc2bffd38fa0d39"
	I1122 00:33:26.633671  271909 cri.go:89] found id: ""
	I1122 00:33:26.633748  271909 ssh_runner.go:195] Run: sudo runc list -f json
	W1122 00:33:26.648676  271909 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-22T00:33:26Z" level=error msg="open /run/runc: no such file or directory"
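The warning above is benign: with no /run/runc state directory there are simply no runc-managed (paused) containers to resume, so the flow logs the failure and continues. A rough sketch of that tolerance (this collapses every failure to an empty result, which is coarser than minikube's real handling):

package sketch

import "os/exec"

// listRuncContainers treats a failed `runc list` (e.g. missing /run/runc)
// as "nothing managed by runc" rather than a fatal error.
func listRuncContainers() ([]byte, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		return []byte("[]"), nil // empty JSON array by this sketch's convention
	}
	return out, nil
}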
	I1122 00:33:26.648776  271909 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1122 00:33:26.657348  271909 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1122 00:33:26.657364  271909 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1122 00:33:26.657406  271909 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1122 00:33:26.665189  271909 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1122 00:33:26.666177  271909 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-084979" does not appear in /home/jenkins/minikube-integration/21934-9122/kubeconfig
	I1122 00:33:26.666815  271909 kubeconfig.go:62] /home/jenkins/minikube-integration/21934-9122/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-084979" cluster setting kubeconfig missing "embed-certs-084979" context setting]
	I1122 00:33:26.667748  271909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/kubeconfig: {Name:mk85f0b9d89ff824568ecbebf0d1111042a6bb93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
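The repair above is triggered because the kubeconfig lacks both the cluster and the context entry for this profile. A sketch of that check using client-go's clientcmd loader (assumes a k8s.io/client-go dependency; the helper name is illustrative):

package sketch

import (
	"k8s.io/client-go/tools/clientcmd"
)

// needsRepair reports whether a profile's cluster or context entry is
// missing from the kubeconfig, the condition logged above.
func needsRepair(kubeconfig, name string) (bool, error) {
	cfg, err := clientcmd.LoadFromFile(kubeconfig)
	if err != nil {
		return false, err
	}
	_, hasCluster := cfg.Clusters[name]
	_, hasContext := cfg.Contexts[name]
	return !hasCluster || !hasContext, nil
}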
	I1122 00:33:26.669849  271909 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1122 00:33:26.679128  271909 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1122 00:33:26.679159  271909 kubeadm.go:602] duration metric: took 21.788533ms to restartPrimaryControlPlane
	I1122 00:33:26.679169  271909 kubeadm.go:403] duration metric: took 80.740597ms to StartCluster
	I1122 00:33:26.679185  271909 settings.go:142] acquiring lock: {Name:mk281bec5fc7c41e6f3fe8d6a1502b13a2db8fe5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:33:26.679246  271909 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21934-9122/kubeconfig
	I1122 00:33:26.681327  271909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/kubeconfig: {Name:mk85f0b9d89ff824568ecbebf0d1111042a6bb93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:33:26.681584  271909 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1122 00:33:26.681781  271909 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1122 00:33:26.681907  271909 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-084979"
	I1122 00:33:26.681928  271909 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-084979"
	I1122 00:33:26.681927  271909 config.go:182] Loaded profile config "embed-certs-084979": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	W1122 00:33:26.681940  271909 addons.go:248] addon storage-provisioner should already be in state true
	I1122 00:33:26.681971  271909 host.go:66] Checking if "embed-certs-084979" exists ...
	I1122 00:33:26.681989  271909 addons.go:70] Setting dashboard=true in profile "embed-certs-084979"
	I1122 00:33:26.682011  271909 addons.go:239] Setting addon dashboard=true in "embed-certs-084979"
	W1122 00:33:26.682023  271909 addons.go:248] addon dashboard should already be in state true
	I1122 00:33:26.682094  271909 host.go:66] Checking if "embed-certs-084979" exists ...
	I1122 00:33:26.682117  271909 addons.go:70] Setting default-storageclass=true in profile "embed-certs-084979"
	I1122 00:33:26.682145  271909 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-084979"
	I1122 00:33:26.682435  271909 cli_runner.go:164] Run: docker container inspect embed-certs-084979 --format={{.State.Status}}
	I1122 00:33:26.682487  271909 cli_runner.go:164] Run: docker container inspect embed-certs-084979 --format={{.State.Status}}
	I1122 00:33:26.682610  271909 cli_runner.go:164] Run: docker container inspect embed-certs-084979 --format={{.State.Status}}
	I1122 00:33:26.686656  271909 out.go:179] * Verifying Kubernetes components...
	I1122 00:33:26.687835  271909 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:33:26.709274  271909 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1122 00:33:26.711233  271909 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1122 00:33:26.711469  271909 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:33:26.711647  271909 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1122 00:33:26.711704  271909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-084979
	I1122 00:33:26.714109  271909 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	
	
	==> CRI-O <==
	Nov 22 00:33:15 default-k8s-diff-port-046175 crio[778]: time="2025-11-22T00:33:15.37410101Z" level=info msg="Starting container: 242b67bcf1e1872970df913cdfaa56cd7f963c144f5153f1474ce203ef8bdfac" id=4fb4895c-bbfc-4f33-9640-0b67cf4c007d name=/runtime.v1.RuntimeService/StartContainer
	Nov 22 00:33:15 default-k8s-diff-port-046175 crio[778]: time="2025-11-22T00:33:15.375978895Z" level=info msg="Started container" PID=1842 containerID=242b67bcf1e1872970df913cdfaa56cd7f963c144f5153f1474ce203ef8bdfac description=kube-system/coredns-66bc5c9577-np5nq/coredns id=4fb4895c-bbfc-4f33-9640-0b67cf4c007d name=/runtime.v1.RuntimeService/StartContainer sandboxID=10a549d29acf986211c4fb5c3a2721d530fcf6081b66841fd155a4b95ed18384
	Nov 22 00:33:18 default-k8s-diff-port-046175 crio[778]: time="2025-11-22T00:33:18.508323504Z" level=info msg="Running pod sandbox: default/busybox/POD" id=3c008309-ba91-4f32-8e06-80f8f2f4c59d name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 22 00:33:18 default-k8s-diff-port-046175 crio[778]: time="2025-11-22T00:33:18.508420897Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:33:18 default-k8s-diff-port-046175 crio[778]: time="2025-11-22T00:33:18.513729576Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:678542a80bba839b96637f3d1d8e1a4757da8a9ef31dad944a3e3139d65b3b8d UID:865d2f9a-32be-473d-8149-08e560d58cdf NetNS:/var/run/netns/e2423126-bdc3-40a1-be3b-e597b2eec431 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008a948}] Aliases:map[]}"
	Nov 22 00:33:18 default-k8s-diff-port-046175 crio[778]: time="2025-11-22T00:33:18.513754606Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 22 00:33:18 default-k8s-diff-port-046175 crio[778]: time="2025-11-22T00:33:18.523712063Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:678542a80bba839b96637f3d1d8e1a4757da8a9ef31dad944a3e3139d65b3b8d UID:865d2f9a-32be-473d-8149-08e560d58cdf NetNS:/var/run/netns/e2423126-bdc3-40a1-be3b-e597b2eec431 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008a948}] Aliases:map[]}"
	Nov 22 00:33:18 default-k8s-diff-port-046175 crio[778]: time="2025-11-22T00:33:18.523824682Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 22 00:33:18 default-k8s-diff-port-046175 crio[778]: time="2025-11-22T00:33:18.524528273Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 22 00:33:18 default-k8s-diff-port-046175 crio[778]: time="2025-11-22T00:33:18.525271925Z" level=info msg="Ran pod sandbox 678542a80bba839b96637f3d1d8e1a4757da8a9ef31dad944a3e3139d65b3b8d with infra container: default/busybox/POD" id=3c008309-ba91-4f32-8e06-80f8f2f4c59d name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 22 00:33:18 default-k8s-diff-port-046175 crio[778]: time="2025-11-22T00:33:18.526496245Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=59efbc11-d1f4-486a-b820-70f987111a42 name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:33:18 default-k8s-diff-port-046175 crio[778]: time="2025-11-22T00:33:18.526612658Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=59efbc11-d1f4-486a-b820-70f987111a42 name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:33:18 default-k8s-diff-port-046175 crio[778]: time="2025-11-22T00:33:18.526659821Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=59efbc11-d1f4-486a-b820-70f987111a42 name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:33:18 default-k8s-diff-port-046175 crio[778]: time="2025-11-22T00:33:18.527420543Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=56aa3fc5-7a82-4b42-8733-16c5c181205b name=/runtime.v1.ImageService/PullImage
	Nov 22 00:33:18 default-k8s-diff-port-046175 crio[778]: time="2025-11-22T00:33:18.529337418Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 22 00:33:19 default-k8s-diff-port-046175 crio[778]: time="2025-11-22T00:33:19.151259863Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=56aa3fc5-7a82-4b42-8733-16c5c181205b name=/runtime.v1.ImageService/PullImage
	Nov 22 00:33:19 default-k8s-diff-port-046175 crio[778]: time="2025-11-22T00:33:19.152025505Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=77eee303-b481-4f38-b987-9abfdecd5fc6 name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:33:19 default-k8s-diff-port-046175 crio[778]: time="2025-11-22T00:33:19.15356803Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=861ebde2-6712-4fe5-b58b-e52d07a2b477 name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:33:19 default-k8s-diff-port-046175 crio[778]: time="2025-11-22T00:33:19.158440116Z" level=info msg="Creating container: default/busybox/busybox" id=1ae843b8-05f8-48aa-84a5-a6656ff2a301 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:33:19 default-k8s-diff-port-046175 crio[778]: time="2025-11-22T00:33:19.158586771Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:33:19 default-k8s-diff-port-046175 crio[778]: time="2025-11-22T00:33:19.163681997Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:33:19 default-k8s-diff-port-046175 crio[778]: time="2025-11-22T00:33:19.164299745Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:33:19 default-k8s-diff-port-046175 crio[778]: time="2025-11-22T00:33:19.203930052Z" level=info msg="Created container aafc89e24359ca66c09f675c3dc9f30dd9705f6e69c45847cef57d52b3162770: default/busybox/busybox" id=1ae843b8-05f8-48aa-84a5-a6656ff2a301 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:33:19 default-k8s-diff-port-046175 crio[778]: time="2025-11-22T00:33:19.204634666Z" level=info msg="Starting container: aafc89e24359ca66c09f675c3dc9f30dd9705f6e69c45847cef57d52b3162770" id=c57371ca-16c4-44bc-adc0-28a8f045420d name=/runtime.v1.RuntimeService/StartContainer
	Nov 22 00:33:19 default-k8s-diff-port-046175 crio[778]: time="2025-11-22T00:33:19.206756543Z" level=info msg="Started container" PID=1917 containerID=aafc89e24359ca66c09f675c3dc9f30dd9705f6e69c45847cef57d52b3162770 description=default/busybox/busybox id=c57371ca-16c4-44bc-adc0-28a8f045420d name=/runtime.v1.RuntimeService/StartContainer sandboxID=678542a80bba839b96637f3d1d8e1a4757da8a9ef31dad944a3e3139d65b3b8d
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	aafc89e24359c       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   678542a80bba8       busybox                                                default
	242b67bcf1e18       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      12 seconds ago      Running             coredns                   0                   10a549d29acf9       coredns-66bc5c9577-np5nq                               kube-system
	de657ca644b43       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   37665104f8432       storage-provisioner                                    kube-system
	779365572a79a       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      23 seconds ago      Running             kindnet-cni               0                   58e6560e04d59       kindnet-nqk28                                          kube-system
	4f9c8367fbce5       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      23 seconds ago      Running             kube-proxy                0                   aa418513843a0       kube-proxy-jdzcl                                       kube-system
	1d41d06635e19       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      34 seconds ago      Running             kube-scheduler            0                   ce2c8a66d7426       kube-scheduler-default-k8s-diff-port-046175            kube-system
	c15f0a221b49b       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      34 seconds ago      Running             kube-apiserver            0                   e1f3df457918e       kube-apiserver-default-k8s-diff-port-046175            kube-system
	dc0aa37c4f0df       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      34 seconds ago      Running             kube-controller-manager   0                   a173cbd729553       kube-controller-manager-default-k8s-diff-port-046175   kube-system
	eb2beecb03318       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      34 seconds ago      Running             etcd                      0                   54877b62937e1       etcd-default-k8s-diff-port-046175                      kube-system
	
	
	==> coredns [242b67bcf1e1872970df913cdfaa56cd7f963c144f5153f1474ce203ef8bdfac] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38925 - 51202 "HINFO IN 4926462553429196244.6400996012518562815. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.090239313s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-046175
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-046175
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785
	                    minikube.k8s.io/name=default-k8s-diff-port-046175
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_22T00_33_00_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 22 Nov 2025 00:32:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-046175
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 22 Nov 2025 00:33:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 22 Nov 2025 00:33:15 +0000   Sat, 22 Nov 2025 00:32:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 22 Nov 2025 00:33:15 +0000   Sat, 22 Nov 2025 00:32:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 22 Nov 2025 00:33:15 +0000   Sat, 22 Nov 2025 00:32:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 22 Nov 2025 00:33:15 +0000   Sat, 22 Nov 2025 00:33:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-046175
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 5665009e93b91d39dc05718b691e3875
	  System UUID:                cef49250-3102-457d-90bd-87a6df160389
	  Boot ID:                    8e6acb0e-95c3-406d-a4d5-d86e16610b0f
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-66bc5c9577-np5nq                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     23s
	  kube-system                 etcd-default-k8s-diff-port-046175                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-nqk28                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-default-k8s-diff-port-046175             250m (3%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-046175    200m (2%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-proxy-jdzcl                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-default-k8s-diff-port-046175             100m (1%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23s                kube-proxy       
	  Normal  Starting                 34s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  34s (x8 over 34s)  kubelet          Node default-k8s-diff-port-046175 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    34s (x8 over 34s)  kubelet          Node default-k8s-diff-port-046175 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     34s (x8 over 34s)  kubelet          Node default-k8s-diff-port-046175 status is now: NodeHasSufficientPID
	  Normal  Starting                 29s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29s                kubelet          Node default-k8s-diff-port-046175 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29s                kubelet          Node default-k8s-diff-port-046175 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29s                kubelet          Node default-k8s-diff-port-046175 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s                node-controller  Node default-k8s-diff-port-046175 event: Registered Node default-k8s-diff-port-046175 in Controller
	  Normal  NodeReady                12s                kubelet          Node default-k8s-diff-port-046175 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.082960] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023673] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.276450] kauditd_printk_skb: 47 callbacks suppressed
	[Nov21 23:48] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.062274] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.023896] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.023887] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.023879] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.023873] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +2.047773] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +4.031577] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[Nov21 23:49] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[ +16.383304] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[ +32.252695] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	
	
	==> etcd [eb2beecb03318bcb4300a5e246ed4c99606c24a5cefe0dfc00cd4e7445d60c7a] <==
	{"level":"warn","ts":"2025-11-22T00:32:58.973040Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-22T00:32:58.658317Z","time spent":"314.556985ms","remote":"127.0.0.1:53324","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4861,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/minions/default-k8s-diff-port-046175\" mod_revision:213 > success:<request_put:<key:\"/registry/minions/default-k8s-diff-port-046175\" value_size:4807 >> failure:<request_range:<key:\"/registry/minions/default-k8s-diff-port-046175\" > >"}
	{"level":"info","ts":"2025-11-22T00:32:59.112803Z","caller":"traceutil/trace.go:172","msg":"trace[1727389604] linearizableReadLoop","detail":"{readStateIndex:296; appliedIndex:296; }","duration":"134.745657ms","start":"2025-11-22T00:32:58.978035Z","end":"2025-11-22T00:32:59.112781Z","steps":["trace[1727389604] 'read index received'  (duration: 134.739804ms)","trace[1727389604] 'applied index is now lower than readState.Index'  (duration: 4.65µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-22T00:32:59.136280Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"158.225534ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" limit:1 ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2025-11-22T00:32:59.136334Z","caller":"traceutil/trace.go:172","msg":"trace[991560052] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:288; }","duration":"158.299919ms","start":"2025-11-22T00:32:58.978025Z","end":"2025-11-22T00:32:59.136325Z","steps":["trace[991560052] 'agreement among raft nodes before linearized reading'  (duration: 134.839039ms)","trace[991560052] 'range keys from in-memory index tree'  (duration: 23.278828ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-22T00:32:59.136275Z","caller":"traceutil/trace.go:172","msg":"trace[164081522] transaction","detail":"{read_only:false; response_revision:289; number_of_response:1; }","duration":"158.441769ms","start":"2025-11-22T00:32:58.977815Z","end":"2025-11-22T00:32:59.136257Z","steps":["trace[164081522] 'process raft request'  (duration: 135.028262ms)","trace[164081522] 'compare'  (duration: 23.316384ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-22T00:32:59.437833Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"174.898608ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597221108934555 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/specs/kube-system/kube-dns\" mod_revision:0 > success:<request_put:<key:\"/registry/services/specs/kube-system/kube-dns\" value_size:1143 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-11-22T00:32:59.437998Z","caller":"traceutil/trace.go:172","msg":"trace[1709608128] transaction","detail":"{read_only:false; response_revision:291; number_of_response:1; }","duration":"299.211151ms","start":"2025-11-22T00:32:59.138767Z","end":"2025-11-22T00:32:59.437979Z","steps":["trace[1709608128] 'process raft request'  (duration: 299.13257ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-22T00:32:59.438035Z","caller":"traceutil/trace.go:172","msg":"trace[54597771] transaction","detail":"{read_only:false; response_revision:290; number_of_response:1; }","duration":"300.033273ms","start":"2025-11-22T00:32:59.137985Z","end":"2025-11-22T00:32:59.438019Z","steps":["trace[54597771] 'process raft request'  (duration: 124.909629ms)","trace[54597771] 'compare'  (duration: 174.789324ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-22T00:32:59.438085Z","caller":"traceutil/trace.go:172","msg":"trace[16022978] transaction","detail":"{read_only:false; response_revision:292; number_of_response:1; }","duration":"263.468854ms","start":"2025-11-22T00:32:59.174605Z","end":"2025-11-22T00:32:59.438074Z","steps":["trace[16022978] 'process raft request'  (duration: 263.323257ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-22T00:32:59.438124Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-22T00:32:59.137964Z","time spent":"300.127304ms","remote":"127.0.0.1:53346","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1196,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/specs/kube-system/kube-dns\" mod_revision:0 > success:<request_put:<key:\"/registry/services/specs/kube-system/kube-dns\" value_size:1143 >> failure:<>"}
	{"level":"info","ts":"2025-11-22T00:32:59.481356Z","caller":"traceutil/trace.go:172","msg":"trace[1458129493] transaction","detail":"{read_only:false; number_of_response:0; response_revision:292; }","duration":"117.690392ms","start":"2025-11-22T00:32:59.363654Z","end":"2025-11-22T00:32:59.481344Z","steps":["trace[1458129493] 'process raft request'  (duration: 117.66778ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-22T00:32:59.481404Z","caller":"traceutil/trace.go:172","msg":"trace[1289300688] transaction","detail":"{read_only:false; number_of_response:0; response_revision:292; }","duration":"117.733557ms","start":"2025-11-22T00:32:59.363654Z","end":"2025-11-22T00:32:59.481387Z","steps":["trace[1289300688] 'process raft request'  (duration: 117.599896ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-22T00:32:59.510441Z","caller":"traceutil/trace.go:172","msg":"trace[57320397] transaction","detail":"{read_only:false; number_of_response:0; response_revision:292; }","duration":"146.644634ms","start":"2025-11-22T00:32:59.363781Z","end":"2025-11-22T00:32:59.510426Z","steps":["trace[57320397] 'process raft request'  (duration: 146.502329ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-22T00:32:59.510484Z","caller":"traceutil/trace.go:172","msg":"trace[1784056463] transaction","detail":"{read_only:false; number_of_response:0; response_revision:292; }","duration":"146.595126ms","start":"2025-11-22T00:32:59.363873Z","end":"2025-11-22T00:32:59.510468Z","steps":["trace[1784056463] 'process raft request'  (duration: 146.468464ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-22T00:32:59.864760Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"222.856808ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597221108934576 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-046175\" mod_revision:277 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-046175\" value_size:7851 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-046175\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-22T00:32:59.864836Z","caller":"traceutil/trace.go:172","msg":"trace[185692294] transaction","detail":"{read_only:false; response_revision:298; number_of_response:1; }","duration":"343.566929ms","start":"2025-11-22T00:32:59.521255Z","end":"2025-11-22T00:32:59.864822Z","steps":["trace[185692294] 'process raft request'  (duration: 120.593606ms)","trace[185692294] 'compare'  (duration: 222.739792ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-22T00:32:59.864910Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-22T00:32:59.521242Z","time spent":"343.633203ms","remote":"127.0.0.1:53342","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":7929,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-046175\" mod_revision:277 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-046175\" value_size:7851 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-046175\" > >"}
	{"level":"info","ts":"2025-11-22T00:32:59.865842Z","caller":"traceutil/trace.go:172","msg":"trace[886854501] transaction","detail":"{read_only:false; response_revision:300; number_of_response:1; }","duration":"343.296937ms","start":"2025-11-22T00:32:59.522531Z","end":"2025-11-22T00:32:59.865828Z","steps":["trace[886854501] 'process raft request'  (duration: 343.252954ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-22T00:32:59.865913Z","caller":"traceutil/trace.go:172","msg":"trace[1887776] transaction","detail":"{read_only:false; response_revision:299; number_of_response:1; }","duration":"343.439077ms","start":"2025-11-22T00:32:59.522441Z","end":"2025-11-22T00:32:59.865881Z","steps":["trace[1887776] 'process raft request'  (duration: 343.268437ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-22T00:32:59.866562Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-22T00:32:59.522521Z","time spent":"343.855218ms","remote":"127.0.0.1:53712","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":420,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/clusterrolebindings/kubeadm:node-proxier\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterrolebindings/kubeadm:node-proxier\" value_size:362 >> failure:<>"}
	{"level":"warn","ts":"2025-11-22T00:32:59.866622Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-22T00:32:59.522430Z","time spent":"344.152168ms","remote":"127.0.0.1:53362","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":192,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/pod-garbage-collector\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/pod-garbage-collector\" value_size:126 >> failure:<>"}
	{"level":"warn","ts":"2025-11-22T00:33:00.137786Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"171.472134ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-controller-manager-default-k8s-diff-port-046175\" limit:1 ","response":"range_response_count:1 size:7347"}
	{"level":"info","ts":"2025-11-22T00:33:00.137932Z","caller":"traceutil/trace.go:172","msg":"trace[679351465] range","detail":"{range_begin:/registry/pods/kube-system/kube-controller-manager-default-k8s-diff-port-046175; range_end:; response_count:1; response_revision:306; }","duration":"171.634906ms","start":"2025-11-22T00:32:59.966281Z","end":"2025-11-22T00:33:00.137916Z","steps":["trace[679351465] 'agreement among raft nodes before linearized reading'  (duration: 17.917685ms)","trace[679351465] 'range keys from in-memory index tree'  (duration: 153.504958ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-22T00:33:00.137894Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"153.630226ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597221108934594 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/horizontal-pod-autoscaler\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/horizontal-pod-autoscaler\" value_size:130 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-11-22T00:33:00.138594Z","caller":"traceutil/trace.go:172","msg":"trace[1469430430] transaction","detail":"{read_only:false; response_revision:307; number_of_response:1; }","duration":"172.441009ms","start":"2025-11-22T00:32:59.966133Z","end":"2025-11-22T00:33:00.138574Z","steps":["trace[1469430430] 'process raft request'  (duration: 18.078117ms)","trace[1469430430] 'compare'  (duration: 153.41984ms)"],"step_count":2}
	
	
	==> kernel <==
	 00:33:27 up  1:15,  0 user,  load average: 2.94, 2.93, 1.89
	Linux default-k8s-diff-port-046175 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [779365572a79a62fed8b3397ef663e36509d46824f1c12d644db7064cb8bdcd9] <==
	I1122 00:33:04.585530       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1122 00:33:04.585787       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1122 00:33:04.585934       1 main.go:148] setting mtu 1500 for CNI 
	I1122 00:33:04.585952       1 main.go:178] kindnetd IP family: "ipv4"
	I1122 00:33:04.585976       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-22T00:33:04Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1122 00:33:04.881497       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1122 00:33:04.881606       1 controller.go:381] "Waiting for informer caches to sync"
	I1122 00:33:04.881630       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1122 00:33:04.978623       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1122 00:33:05.277508       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1122 00:33:05.277538       1 metrics.go:72] Registering metrics
	I1122 00:33:05.277622       1 controller.go:711] "Syncing nftables rules"
	I1122 00:33:14.885117       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1122 00:33:14.885186       1 main.go:301] handling current node
	I1122 00:33:24.885132       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1122 00:33:24.885178       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c15f0a221b49b20eb30b472a29367629dad22698c740c13b18ec82bd97818dbd] <==
	I1122 00:32:55.851242       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1122 00:32:55.852719       1 controller.go:667] quota admission added evaluator for: namespaces
	I1122 00:32:55.856314       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1122 00:32:55.856512       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1122 00:32:55.863042       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1122 00:32:55.876351       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1122 00:32:55.882203       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:32:56.754952       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1122 00:32:56.758800       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1122 00:32:56.758820       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1122 00:32:57.228027       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1122 00:32:57.268468       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1122 00:32:57.361829       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1122 00:32:57.371107       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1122 00:32:57.372483       1 controller.go:667] quota admission added evaluator for: endpoints
	I1122 00:32:57.378374       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1122 00:32:57.790129       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1122 00:32:58.656049       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1122 00:32:59.438513       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1122 00:32:59.515330       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1122 00:33:03.442290       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1122 00:33:03.542463       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1122 00:33:03.792176       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:33:03.796625       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1122 00:33:26.308848       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8444->192.168.85.1:41598: use of closed network connection
	
	
	==> kube-controller-manager [dc0aa37c4f0df1c669a12e10c5733e108cf82c22e482117d84609015f42304e2] <==
	I1122 00:33:02.775329       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1122 00:33:02.775412       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-046175"
	I1122 00:33:02.775474       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1122 00:33:02.789479       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1122 00:33:02.789492       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1122 00:33:02.789547       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1122 00:33:02.789551       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1122 00:33:02.789566       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1122 00:33:02.789572       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1122 00:33:02.789593       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1122 00:33:02.790742       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1122 00:33:02.790768       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1122 00:33:02.790845       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1122 00:33:02.790848       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1122 00:33:02.793097       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1122 00:33:02.794200       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1122 00:33:02.798823       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1122 00:33:02.800951       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1122 00:33:02.805175       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1122 00:33:02.807416       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1122 00:33:02.813726       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1122 00:33:02.817900       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1122 00:33:02.817917       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1122 00:33:02.817925       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1122 00:33:17.778232       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [4f9c8367fbce54bdc476c6e72e57d3baa8b24c4b62f875b01d249f4e6a90c90c] <==
	I1122 00:33:04.466959       1 server_linux.go:53] "Using iptables proxy"
	I1122 00:33:04.526366       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1122 00:33:04.627187       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1122 00:33:04.627229       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1122 00:33:04.627349       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1122 00:33:04.648980       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1122 00:33:04.649040       1 server_linux.go:132] "Using iptables Proxier"
	I1122 00:33:04.655389       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1122 00:33:04.655783       1 server.go:527] "Version info" version="v1.34.1"
	I1122 00:33:04.655808       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:33:04.658624       1 config.go:200] "Starting service config controller"
	I1122 00:33:04.658672       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1122 00:33:04.658698       1 config.go:106] "Starting endpoint slice config controller"
	I1122 00:33:04.658704       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1122 00:33:04.658979       1 config.go:403] "Starting serviceCIDR config controller"
	I1122 00:33:04.659273       1 config.go:309] "Starting node config controller"
	I1122 00:33:04.659316       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1122 00:33:04.659424       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1122 00:33:04.659485       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1122 00:33:04.759107       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1122 00:33:04.760235       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1122 00:33:04.760353       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [1d41d06635e195664edd63e1608c8091867787e7a31d99f858686b1a8577542b] <==
	E1122 00:32:55.823851       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1122 00:32:55.824115       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1122 00:32:55.824210       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1122 00:32:55.824106       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1122 00:32:55.824217       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1122 00:32:55.824368       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1122 00:32:55.824387       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1122 00:32:55.824406       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1122 00:32:55.824404       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1122 00:32:55.824480       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1122 00:32:55.824505       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1122 00:32:55.824582       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1122 00:32:56.701273       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1122 00:32:56.776752       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1122 00:32:56.787979       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1122 00:32:56.836836       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1122 00:32:56.850031       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1122 00:32:56.888022       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1122 00:32:56.903321       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1122 00:32:56.985778       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1122 00:32:56.991871       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1122 00:32:57.018268       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1122 00:32:57.048767       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1122 00:32:57.058832       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I1122 00:32:59.220784       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 22 00:33:03 default-k8s-diff-port-046175 kubelet[1314]: I1122 00:33:03.469087    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f20a454d-e357-46bf-803b-f0166329db1a-lib-modules\") pod \"kube-proxy-jdzcl\" (UID: \"f20a454d-e357-46bf-803b-f0166329db1a\") " pod="kube-system/kube-proxy-jdzcl"
	Nov 22 00:33:03 default-k8s-diff-port-046175 kubelet[1314]: I1122 00:33:03.469139    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjgvp\" (UniqueName: \"kubernetes.io/projected/f20a454d-e357-46bf-803b-f0166329db1a-kube-api-access-sjgvp\") pod \"kube-proxy-jdzcl\" (UID: \"f20a454d-e357-46bf-803b-f0166329db1a\") " pod="kube-system/kube-proxy-jdzcl"
	Nov 22 00:33:03 default-k8s-diff-port-046175 kubelet[1314]: I1122 00:33:03.469248    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f20a454d-e357-46bf-803b-f0166329db1a-kube-proxy\") pod \"kube-proxy-jdzcl\" (UID: \"f20a454d-e357-46bf-803b-f0166329db1a\") " pod="kube-system/kube-proxy-jdzcl"
	Nov 22 00:33:03 default-k8s-diff-port-046175 kubelet[1314]: I1122 00:33:03.469290    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f20a454d-e357-46bf-803b-f0166329db1a-xtables-lock\") pod \"kube-proxy-jdzcl\" (UID: \"f20a454d-e357-46bf-803b-f0166329db1a\") " pod="kube-system/kube-proxy-jdzcl"
	Nov 22 00:33:03 default-k8s-diff-port-046175 kubelet[1314]: I1122 00:33:03.569509    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fd6ece46-cf0c-4d24-8859-aaf670c70fb5-xtables-lock\") pod \"kindnet-nqk28\" (UID: \"fd6ece46-cf0c-4d24-8859-aaf670c70fb5\") " pod="kube-system/kindnet-nqk28"
	Nov 22 00:33:03 default-k8s-diff-port-046175 kubelet[1314]: I1122 00:33:03.569570    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fd6ece46-cf0c-4d24-8859-aaf670c70fb5-lib-modules\") pod \"kindnet-nqk28\" (UID: \"fd6ece46-cf0c-4d24-8859-aaf670c70fb5\") " pod="kube-system/kindnet-nqk28"
	Nov 22 00:33:03 default-k8s-diff-port-046175 kubelet[1314]: I1122 00:33:03.569696    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggk7v\" (UniqueName: \"kubernetes.io/projected/fd6ece46-cf0c-4d24-8859-aaf670c70fb5-kube-api-access-ggk7v\") pod \"kindnet-nqk28\" (UID: \"fd6ece46-cf0c-4d24-8859-aaf670c70fb5\") " pod="kube-system/kindnet-nqk28"
	Nov 22 00:33:03 default-k8s-diff-port-046175 kubelet[1314]: I1122 00:33:03.569839    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/fd6ece46-cf0c-4d24-8859-aaf670c70fb5-cni-cfg\") pod \"kindnet-nqk28\" (UID: \"fd6ece46-cf0c-4d24-8859-aaf670c70fb5\") " pod="kube-system/kindnet-nqk28"
	Nov 22 00:33:03 default-k8s-diff-port-046175 kubelet[1314]: E1122 00:33:03.575073    1314 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 22 00:33:03 default-k8s-diff-port-046175 kubelet[1314]: E1122 00:33:03.575105    1314 projected.go:196] Error preparing data for projected volume kube-api-access-sjgvp for pod kube-system/kube-proxy-jdzcl: configmap "kube-root-ca.crt" not found
	Nov 22 00:33:03 default-k8s-diff-port-046175 kubelet[1314]: E1122 00:33:03.575187    1314 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f20a454d-e357-46bf-803b-f0166329db1a-kube-api-access-sjgvp podName:f20a454d-e357-46bf-803b-f0166329db1a nodeName:}" failed. No retries permitted until 2025-11-22 00:33:04.075160375 +0000 UTC m=+5.806291242 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-sjgvp" (UniqueName: "kubernetes.io/projected/f20a454d-e357-46bf-803b-f0166329db1a-kube-api-access-sjgvp") pod "kube-proxy-jdzcl" (UID: "f20a454d-e357-46bf-803b-f0166329db1a") : configmap "kube-root-ca.crt" not found
	Nov 22 00:33:03 default-k8s-diff-port-046175 kubelet[1314]: E1122 00:33:03.677569    1314 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 22 00:33:03 default-k8s-diff-port-046175 kubelet[1314]: E1122 00:33:03.677598    1314 projected.go:196] Error preparing data for projected volume kube-api-access-ggk7v for pod kube-system/kindnet-nqk28: configmap "kube-root-ca.crt" not found
	Nov 22 00:33:03 default-k8s-diff-port-046175 kubelet[1314]: E1122 00:33:03.677652    1314 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fd6ece46-cf0c-4d24-8859-aaf670c70fb5-kube-api-access-ggk7v podName:fd6ece46-cf0c-4d24-8859-aaf670c70fb5 nodeName:}" failed. No retries permitted until 2025-11-22 00:33:04.177633497 +0000 UTC m=+5.908764358 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-ggk7v" (UniqueName: "kubernetes.io/projected/fd6ece46-cf0c-4d24-8859-aaf670c70fb5-kube-api-access-ggk7v") pod "kindnet-nqk28" (UID: "fd6ece46-cf0c-4d24-8859-aaf670c70fb5") : configmap "kube-root-ca.crt" not found
	Nov 22 00:33:05 default-k8s-diff-port-046175 kubelet[1314]: I1122 00:33:05.396526    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-nqk28" podStartSLOduration=2.396505075 podStartE2EDuration="2.396505075s" podCreationTimestamp="2025-11-22 00:33:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:33:05.387306409 +0000 UTC m=+7.118437305" watchObservedRunningTime="2025-11-22 00:33:05.396505075 +0000 UTC m=+7.127635947"
	Nov 22 00:33:06 default-k8s-diff-port-046175 kubelet[1314]: I1122 00:33:06.181620    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jdzcl" podStartSLOduration=3.181597502 podStartE2EDuration="3.181597502s" podCreationTimestamp="2025-11-22 00:33:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:33:05.396803987 +0000 UTC m=+7.127934857" watchObservedRunningTime="2025-11-22 00:33:06.181597502 +0000 UTC m=+7.912728374"
	Nov 22 00:33:15 default-k8s-diff-port-046175 kubelet[1314]: I1122 00:33:15.000165    1314 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 22 00:33:15 default-k8s-diff-port-046175 kubelet[1314]: I1122 00:33:15.041147    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6bf2527b-42f1-42dd-980e-e1006db2273d-config-volume\") pod \"coredns-66bc5c9577-np5nq\" (UID: \"6bf2527b-42f1-42dd-980e-e1006db2273d\") " pod="kube-system/coredns-66bc5c9577-np5nq"
	Nov 22 00:33:15 default-k8s-diff-port-046175 kubelet[1314]: I1122 00:33:15.041196    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/5f32ba19-162c-4893-a387-2c8b492c1b6a-tmp\") pod \"storage-provisioner\" (UID: \"5f32ba19-162c-4893-a387-2c8b492c1b6a\") " pod="kube-system/storage-provisioner"
	Nov 22 00:33:15 default-k8s-diff-port-046175 kubelet[1314]: I1122 00:33:15.041219    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9nl9\" (UniqueName: \"kubernetes.io/projected/6bf2527b-42f1-42dd-980e-e1006db2273d-kube-api-access-c9nl9\") pod \"coredns-66bc5c9577-np5nq\" (UID: \"6bf2527b-42f1-42dd-980e-e1006db2273d\") " pod="kube-system/coredns-66bc5c9577-np5nq"
	Nov 22 00:33:15 default-k8s-diff-port-046175 kubelet[1314]: I1122 00:33:15.041245    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjcdr\" (UniqueName: \"kubernetes.io/projected/5f32ba19-162c-4893-a387-2c8b492c1b6a-kube-api-access-sjcdr\") pod \"storage-provisioner\" (UID: \"5f32ba19-162c-4893-a387-2c8b492c1b6a\") " pod="kube-system/storage-provisioner"
	Nov 22 00:33:15 default-k8s-diff-port-046175 kubelet[1314]: I1122 00:33:15.416775    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-np5nq" podStartSLOduration=11.416754616 podStartE2EDuration="11.416754616s" podCreationTimestamp="2025-11-22 00:33:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:33:15.407387565 +0000 UTC m=+17.138518434" watchObservedRunningTime="2025-11-22 00:33:15.416754616 +0000 UTC m=+17.147885488"
	Nov 22 00:33:16 default-k8s-diff-port-046175 kubelet[1314]: I1122 00:33:16.412743    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.412720382 podStartE2EDuration="12.412720382s" podCreationTimestamp="2025-11-22 00:33:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:33:15.417069705 +0000 UTC m=+17.148200568" watchObservedRunningTime="2025-11-22 00:33:16.412720382 +0000 UTC m=+18.143851253"
	Nov 22 00:33:18 default-k8s-diff-port-046175 kubelet[1314]: I1122 00:33:18.262206    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxxfb\" (UniqueName: \"kubernetes.io/projected/865d2f9a-32be-473d-8149-08e560d58cdf-kube-api-access-qxxfb\") pod \"busybox\" (UID: \"865d2f9a-32be-473d-8149-08e560d58cdf\") " pod="default/busybox"
	Nov 22 00:33:19 default-k8s-diff-port-046175 kubelet[1314]: I1122 00:33:19.423718    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=0.797697498 podStartE2EDuration="1.42369775s" podCreationTimestamp="2025-11-22 00:33:18 +0000 UTC" firstStartedPulling="2025-11-22 00:33:18.52692132 +0000 UTC m=+20.258052175" lastFinishedPulling="2025-11-22 00:33:19.15292157 +0000 UTC m=+20.884052427" observedRunningTime="2025-11-22 00:33:19.423490916 +0000 UTC m=+21.154621786" watchObservedRunningTime="2025-11-22 00:33:19.42369775 +0000 UTC m=+21.154828621"
	
	
	==> storage-provisioner [de657ca644b438851bb3963a774d8234ed2cdf4343a84f5bcef0570d148c1892] <==
	I1122 00:33:15.381356       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1122 00:33:15.389430       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1122 00:33:15.389471       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1122 00:33:15.391362       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:33:15.395863       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1122 00:33:15.396111       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1122 00:33:15.396377       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-046175_78606433-0adb-472c-a9ab-3ff13b540467!
	I1122 00:33:15.396372       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f83cc8c2-c96e-4e26-a62a-1dc1f2279333", APIVersion:"v1", ResourceVersion:"445", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-046175_78606433-0adb-472c-a9ab-3ff13b540467 became leader
	W1122 00:33:15.399117       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:33:15.403543       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1122 00:33:15.497555       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-046175_78606433-0adb-472c-a9ab-3ff13b540467!
	W1122 00:33:17.407155       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:33:17.413615       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:33:19.416710       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:33:19.424510       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:33:21.427011       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:33:21.430531       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:33:23.434597       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:33:23.439152       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:33:25.443507       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:33:25.451724       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:33:27.455830       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:33:27.462736       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-046175 -n default-k8s-diff-port-046175
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-046175 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.36s)
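Note on the storage-provisioner warnings above: the repeated "v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice" lines line up with the provisioner taking its k8s.io-minikube-hostpath leader lock on a v1 Endpoints object (the LeaderElection event above references Kind:"Endpoints"), so every lock renewal touches the deprecated API. For comparison, a minimal client-go sketch of a coordination.k8s.io Lease-based lock, which does not trigger that warning, follows. This is illustrative only, not the provisioner's actual code; the identity string is a placeholder.

	package main

	import (
		"context"
		"log"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// Lease-based lock on the same name/namespace the provisioner uses above.
		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: "provisioner-example"}, // placeholder identity
		}

		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second, // matches the ~2s cadence of the warnings in the log
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) { log.Println("became leader") },
				OnStoppedLeading: func() { log.Println("lost leadership") },
			},
		})
	}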

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (6.06s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-531189 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-531189 --alsologtostderr -v=1: exit status 80 (2.431150314s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-531189 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1122 00:33:44.963476  280256 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:33:44.963699  280256 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:33:44.963712  280256 out.go:374] Setting ErrFile to fd 2...
	I1122 00:33:44.963719  280256 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:33:44.964007  280256 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9122/.minikube/bin
	I1122 00:33:44.964349  280256 out.go:368] Setting JSON to false
	I1122 00:33:44.964378  280256 mustload.go:66] Loading cluster: newest-cni-531189
	I1122 00:33:44.964915  280256 config.go:182] Loaded profile config "newest-cni-531189": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:33:44.965382  280256 cli_runner.go:164] Run: docker container inspect newest-cni-531189 --format={{.State.Status}}
	I1122 00:33:44.985681  280256 host.go:66] Checking if "newest-cni-531189" exists ...
	I1122 00:33:44.985997  280256 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:33:45.080899  280256 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:80 SystemTime:2025-11-22 00:33:45.067007576 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1122 00:33:45.081754  280256 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-531189 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1122 00:33:45.083524  280256 out.go:179] * Pausing node newest-cni-531189 ... 
	I1122 00:33:45.084872  280256 host.go:66] Checking if "newest-cni-531189" exists ...
	I1122 00:33:45.085235  280256 ssh_runner.go:195] Run: systemctl --version
	I1122 00:33:45.085281  280256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531189
	I1122 00:33:45.107651  280256 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/newest-cni-531189/id_rsa Username:docker}
	I1122 00:33:45.212169  280256 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:33:45.227173  280256 pause.go:52] kubelet running: true
	I1122 00:33:45.227232  280256 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1122 00:33:45.374164  280256 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1122 00:33:45.374243  280256 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1122 00:33:45.469940  280256 cri.go:89] found id: "76ad4c36d0cb528d5757c3c48dd71b26ede7443a405b5e3c3026dbeffc94229f"
	I1122 00:33:45.470223  280256 cri.go:89] found id: "edf483adf40d4c94ed15cb89c5b6defd41c014811dc698e922830824433e3234"
	I1122 00:33:45.470238  280256 cri.go:89] found id: "5ea3647cdd25ed9233dde0ab86c7c9b6c9d20bad6f655db096a4c60dd9ea96c4"
	I1122 00:33:45.470244  280256 cri.go:89] found id: "1effea11fd2cda1b8bc3e2c88c337972f678c2fb816b2a10fa07b43ee858b32e"
	I1122 00:33:45.470249  280256 cri.go:89] found id: "fc8ea4b6850789f337a85f85c11398106a27065f0faec23e9f3eb4ac62e06fa2"
	I1122 00:33:45.470254  280256 cri.go:89] found id: "2576ff49d776c1851dc8a648545b019e99ee62e4689645fd58fcb7a560f111ae"
	I1122 00:33:45.470284  280256 cri.go:89] found id: ""
	I1122 00:33:45.470330  280256 ssh_runner.go:195] Run: sudo runc list -f json
	I1122 00:33:45.486473  280256 retry.go:31] will retry after 320.095368ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-22T00:33:45Z" level=error msg="open /run/runc: no such file or directory"
	I1122 00:33:45.806998  280256 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:33:45.827976  280256 pause.go:52] kubelet running: false
	I1122 00:33:45.828043  280256 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1122 00:33:45.963677  280256 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1122 00:33:45.963759  280256 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1122 00:33:46.049082  280256 cri.go:89] found id: "76ad4c36d0cb528d5757c3c48dd71b26ede7443a405b5e3c3026dbeffc94229f"
	I1122 00:33:46.049105  280256 cri.go:89] found id: "edf483adf40d4c94ed15cb89c5b6defd41c014811dc698e922830824433e3234"
	I1122 00:33:46.049112  280256 cri.go:89] found id: "5ea3647cdd25ed9233dde0ab86c7c9b6c9d20bad6f655db096a4c60dd9ea96c4"
	I1122 00:33:46.049118  280256 cri.go:89] found id: "1effea11fd2cda1b8bc3e2c88c337972f678c2fb816b2a10fa07b43ee858b32e"
	I1122 00:33:46.049122  280256 cri.go:89] found id: "fc8ea4b6850789f337a85f85c11398106a27065f0faec23e9f3eb4ac62e06fa2"
	I1122 00:33:46.049126  280256 cri.go:89] found id: "2576ff49d776c1851dc8a648545b019e99ee62e4689645fd58fcb7a560f111ae"
	I1122 00:33:46.049130  280256 cri.go:89] found id: ""
	I1122 00:33:46.049180  280256 ssh_runner.go:195] Run: sudo runc list -f json
	I1122 00:33:46.061389  280256 retry.go:31] will retry after 478.763419ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-22T00:33:46Z" level=error msg="open /run/runc: no such file or directory"
	I1122 00:33:46.541161  280256 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:33:46.554481  280256 pause.go:52] kubelet running: false
	I1122 00:33:46.554565  280256 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1122 00:33:46.667563  280256 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1122 00:33:46.667634  280256 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1122 00:33:46.750858  280256 cri.go:89] found id: "76ad4c36d0cb528d5757c3c48dd71b26ede7443a405b5e3c3026dbeffc94229f"
	I1122 00:33:46.750882  280256 cri.go:89] found id: "edf483adf40d4c94ed15cb89c5b6defd41c014811dc698e922830824433e3234"
	I1122 00:33:46.750887  280256 cri.go:89] found id: "5ea3647cdd25ed9233dde0ab86c7c9b6c9d20bad6f655db096a4c60dd9ea96c4"
	I1122 00:33:46.750890  280256 cri.go:89] found id: "1effea11fd2cda1b8bc3e2c88c337972f678c2fb816b2a10fa07b43ee858b32e"
	I1122 00:33:46.750893  280256 cri.go:89] found id: "fc8ea4b6850789f337a85f85c11398106a27065f0faec23e9f3eb4ac62e06fa2"
	I1122 00:33:46.750896  280256 cri.go:89] found id: "2576ff49d776c1851dc8a648545b019e99ee62e4689645fd58fcb7a560f111ae"
	I1122 00:33:46.750899  280256 cri.go:89] found id: ""
	I1122 00:33:46.750936  280256 ssh_runner.go:195] Run: sudo runc list -f json
	I1122 00:33:46.762816  280256 retry.go:31] will retry after 352.323875ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-22T00:33:46Z" level=error msg="open /run/runc: no such file or directory"
	I1122 00:33:47.115385  280256 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:33:47.128113  280256 pause.go:52] kubelet running: false
	I1122 00:33:47.128170  280256 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1122 00:33:47.228929  280256 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1122 00:33:47.229019  280256 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1122 00:33:47.290988  280256 cri.go:89] found id: "76ad4c36d0cb528d5757c3c48dd71b26ede7443a405b5e3c3026dbeffc94229f"
	I1122 00:33:47.291017  280256 cri.go:89] found id: "edf483adf40d4c94ed15cb89c5b6defd41c014811dc698e922830824433e3234"
	I1122 00:33:47.291021  280256 cri.go:89] found id: "5ea3647cdd25ed9233dde0ab86c7c9b6c9d20bad6f655db096a4c60dd9ea96c4"
	I1122 00:33:47.291024  280256 cri.go:89] found id: "1effea11fd2cda1b8bc3e2c88c337972f678c2fb816b2a10fa07b43ee858b32e"
	I1122 00:33:47.291027  280256 cri.go:89] found id: "fc8ea4b6850789f337a85f85c11398106a27065f0faec23e9f3eb4ac62e06fa2"
	I1122 00:33:47.291030  280256 cri.go:89] found id: "2576ff49d776c1851dc8a648545b019e99ee62e4689645fd58fcb7a560f111ae"
	I1122 00:33:47.291034  280256 cri.go:89] found id: ""
	I1122 00:33:47.291092  280256 ssh_runner.go:195] Run: sudo runc list -f json
	I1122 00:33:47.304888  280256 out.go:203] 
	W1122 00:33:47.306112  280256 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-22T00:33:47Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-22T00:33:47Z" level=error msg="open /run/runc: no such file or directory"
	
	W1122 00:33:47.306129  280256 out.go:285] * 
	* 
	W1122 00:33:47.310039  280256 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1122 00:33:47.311173  280256 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p newest-cni-531189 --alsologtostderr -v=1 failed: exit status 80
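The failure mode is consistent across all four attempts in the stderr capture above: crictl lists the same six kube-system containers each time, but every "sudo runc list -f json" exits 1 with "open /run/runc: no such file or directory", so the pause path never obtains a runnable container list and exits with GUEST_PAUSE. A small, hypothetical Go diagnostic (not minikube code) that reproduces the failing call and then probes a few candidate runtime state roots is sketched below; the alternate root paths are assumptions, since the log only establishes that /run/runc is missing on this crio node.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Reproduce the exact invocation the pause path retries above.
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err == nil {
			fmt.Printf("runc state: %s\n", out)
			return
		}
		fmt.Fprintf(os.Stderr, "runc list failed: %v\n%s", err, out)

		// If /run/runc is absent, the runtime may keep state elsewhere
		// (e.g. crun under /run/crun). These candidate paths are assumptions.
		for _, root := range []string{"/run/runc", "/run/crun", "/run/containers"} {
			if _, statErr := os.Stat(root); statErr == nil {
				fmt.Printf("state root present: %s\n", root)
			} else {
				fmt.Printf("state root missing: %s\n", root)
			}
		}
	}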
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-531189
helpers_test.go:243: (dbg) docker inspect newest-cni-531189:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "65c93ca663785ca6d510675e02ddb8c57cbafd33c1f68bf07a5dd2a4c309fb8b",
	        "Created": "2025-11-22T00:33:00.30734986Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 277530,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-22T00:33:34.537070411Z",
	            "FinishedAt": "2025-11-22T00:33:33.524111057Z"
	        },
	        "Image": "sha256:e5906b22e872a17998ae88aee6d850484e7a99144e0db6afcf2c44a53e6042d4",
	        "ResolvConfPath": "/var/lib/docker/containers/65c93ca663785ca6d510675e02ddb8c57cbafd33c1f68bf07a5dd2a4c309fb8b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/65c93ca663785ca6d510675e02ddb8c57cbafd33c1f68bf07a5dd2a4c309fb8b/hostname",
	        "HostsPath": "/var/lib/docker/containers/65c93ca663785ca6d510675e02ddb8c57cbafd33c1f68bf07a5dd2a4c309fb8b/hosts",
	        "LogPath": "/var/lib/docker/containers/65c93ca663785ca6d510675e02ddb8c57cbafd33c1f68bf07a5dd2a4c309fb8b/65c93ca663785ca6d510675e02ddb8c57cbafd33c1f68bf07a5dd2a4c309fb8b-json.log",
	        "Name": "/newest-cni-531189",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-531189:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-531189",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "65c93ca663785ca6d510675e02ddb8c57cbafd33c1f68bf07a5dd2a4c309fb8b",
	                "LowerDir": "/var/lib/docker/overlay2/bcd7c4ef17fddfcbaac27bd8b3fcac3c9932d29e7097cf3cad054082bb292d8f-init/diff:/var/lib/docker/overlay2/ba9391341a6f38c98c330a240b44a37e901d779a7c95e15c141c59c46e09c348/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bcd7c4ef17fddfcbaac27bd8b3fcac3c9932d29e7097cf3cad054082bb292d8f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bcd7c4ef17fddfcbaac27bd8b3fcac3c9932d29e7097cf3cad054082bb292d8f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bcd7c4ef17fddfcbaac27bd8b3fcac3c9932d29e7097cf3cad054082bb292d8f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-531189",
	                "Source": "/var/lib/docker/volumes/newest-cni-531189/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-531189",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-531189",
	                "name.minikube.sigs.k8s.io": "newest-cni-531189",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "d7d07feab84663aa811c503b1cbd466a59e9e10ec57ecb7fd201fa9d8c8de344",
	            "SandboxKey": "/var/run/docker/netns/d7d07feab846",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-531189": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b8187a3a6ebbb0612319b5aa920a8b27ea6d7a8c6a1abed3774766a0afd701a8",
	                    "EndpointID": "2878be71a4175ed09eff22ce43be42f7c76ceb44521488e5ec1f8ab6a9031a84",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "02:79:d9:20:99:58",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-531189",
	                        "65c93ca66378"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
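The inspect output above shows each exposed container port bound to an ephemeral host port on 127.0.0.1 (see NetworkSettings.Ports). The same mapping can be pulled out without paging through the full JSON; a minimal sketch using docker inspect's Go-template support, assuming the newest-cni-531189 container from this run still exists:

	docker container inspect newest-cni-531189 \
	  --format '{{range $port, $bindings := .NetworkSettings.Ports}}{{$port}} -> {{(index $bindings 0).HostPort}}{{"\n"}}{{end}}'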
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-531189 -n newest-cni-531189
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-531189 -n newest-cni-531189: exit status 2 (321.405619ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
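A nonzero exit here does not by itself mean the cluster is gone: minikube status exits nonzero whenever some component is not reported as Running, which is why the harness marks it "(may be ok)". One way to get the per-component breakdown, assuming the same binary and profile as this run (--output json is a supported status flag):

	out/minikube-linux-amd64 status -p newest-cni-531189 --output json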
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-531189 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ old-k8s-version-377321 image list --format=json                                                                                                                                                                                               │ old-k8s-version-377321       │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │ 22 Nov 25 00:32 UTC │
	│ pause   │ -p old-k8s-version-377321 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-377321       │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │                     │
	│ delete  │ -p old-k8s-version-377321                                                                                                                                                                                                                     │ old-k8s-version-377321       │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │ 22 Nov 25 00:32 UTC │
	│ delete  │ -p old-k8s-version-377321                                                                                                                                                                                                                     │ old-k8s-version-377321       │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │ 22 Nov 25 00:32 UTC │
	│ delete  │ -p disable-driver-mounts-751225                                                                                                                                                                                                               │ disable-driver-mounts-751225 │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │ 22 Nov 25 00:32 UTC │
	│ start   │ -p default-k8s-diff-port-046175 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-046175 │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │ 22 Nov 25 00:33 UTC │
	│ image   │ no-preload-983546 image list --format=json                                                                                                                                                                                                    │ no-preload-983546            │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │ 22 Nov 25 00:32 UTC │
	│ pause   │ -p no-preload-983546 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-983546            │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │                     │
	│ delete  │ -p no-preload-983546                                                                                                                                                                                                                          │ no-preload-983546            │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │ 22 Nov 25 00:32 UTC │
	│ delete  │ -p no-preload-983546                                                                                                                                                                                                                          │ no-preload-983546            │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │ 22 Nov 25 00:32 UTC │
	│ start   │ -p newest-cni-531189 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-531189            │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │ 22 Nov 25 00:33 UTC │
	│ addons  │ enable metrics-server -p embed-certs-084979 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-084979           │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │                     │
	│ stop    │ -p embed-certs-084979 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-084979           │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │ 22 Nov 25 00:33 UTC │
	│ addons  │ enable dashboard -p embed-certs-084979 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-084979           │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │ 22 Nov 25 00:33 UTC │
	│ start   │ -p embed-certs-084979 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-084979           │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-531189 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-531189            │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │                     │
	│ stop    │ -p newest-cni-531189 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-531189            │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │ 22 Nov 25 00:33 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-046175 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-046175 │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-046175 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-046175 │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │ 22 Nov 25 00:33 UTC │
	│ addons  │ enable dashboard -p newest-cni-531189 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-531189            │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │ 22 Nov 25 00:33 UTC │
	│ start   │ -p newest-cni-531189 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-531189            │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │ 22 Nov 25 00:33 UTC │
	│ image   │ newest-cni-531189 image list --format=json                                                                                                                                                                                                    │ newest-cni-531189            │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │ 22 Nov 25 00:33 UTC │
	│ pause   │ -p newest-cni-531189 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-531189            │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-046175 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-046175 │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │ 22 Nov 25 00:33 UTC │
	│ start   │ -p default-k8s-diff-port-046175 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-046175 │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/22 00:33:45
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1122 00:33:45.421187  280462 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:33:45.421310  280462 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:33:45.421317  280462 out.go:374] Setting ErrFile to fd 2...
	I1122 00:33:45.421324  280462 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:33:45.421645  280462 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9122/.minikube/bin
	I1122 00:33:45.422250  280462 out.go:368] Setting JSON to false
	I1122 00:33:45.423777  280462 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4574,"bootTime":1763767051,"procs":337,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1122 00:33:45.423854  280462 start.go:143] virtualization: kvm guest
	I1122 00:33:45.425693  280462 out.go:179] * [default-k8s-diff-port-046175] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1122 00:33:45.426785  280462 notify.go:221] Checking for updates...
	I1122 00:33:45.426865  280462 out.go:179]   - MINIKUBE_LOCATION=21934
	I1122 00:33:45.428105  280462 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 00:33:45.429792  280462 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-9122/kubeconfig
	I1122 00:33:45.430894  280462 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-9122/.minikube
	I1122 00:33:45.434297  280462 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1122 00:33:45.435552  280462 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 00:33:45.437246  280462 config.go:182] Loaded profile config "default-k8s-diff-port-046175": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:33:45.438085  280462 driver.go:422] Setting default libvirt URI to qemu:///system
	I1122 00:33:45.474515  280462 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1122 00:33:45.474618  280462 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:33:45.543961  280462 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-22 00:33:45.532674231 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1122 00:33:45.544120  280462 docker.go:319] overlay module found
	I1122 00:33:45.545587  280462 out.go:179] * Using the docker driver based on existing profile
	I1122 00:33:45.546567  280462 start.go:309] selected driver: docker
	I1122 00:33:45.546585  280462 start.go:930] validating driver "docker" against &{Name:default-k8s-diff-port-046175 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-046175 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:33:45.546691  280462 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 00:33:45.547441  280462 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:33:45.615649  280462 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-22 00:33:45.605134567 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1122 00:33:45.616021  280462 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:33:45.616077  280462 cni.go:84] Creating CNI manager for ""
	I1122 00:33:45.616151  280462 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1122 00:33:45.616236  280462 start.go:353] cluster config:
	{Name:default-k8s-diff-port-046175 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-046175 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:33:45.618824  280462 out.go:179] * Starting "default-k8s-diff-port-046175" primary control-plane node in "default-k8s-diff-port-046175" cluster
	I1122 00:33:45.619978  280462 cache.go:134] Beginning downloading kic base image for docker with crio
	I1122 00:33:45.621216  280462 out.go:179] * Pulling base image v0.0.48-1763588073-21934 ...
	I1122 00:33:45.622257  280462 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:33:45.622296  280462 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21934-9122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1122 00:33:45.622312  280462 cache.go:65] Caching tarball of preloaded images
	I1122 00:33:45.622391  280462 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon
	I1122 00:33:45.622424  280462 preload.go:238] Found /home/jenkins/minikube-integration/21934-9122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1122 00:33:45.622442  280462 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1122 00:33:45.622571  280462 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/default-k8s-diff-port-046175/config.json ...
	I1122 00:33:45.645602  280462 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon, skipping pull
	I1122 00:33:45.645631  280462 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e exists in daemon, skipping load
	I1122 00:33:45.645654  280462 cache.go:243] Successfully downloaded all kic artifacts
	I1122 00:33:45.645685  280462 start.go:360] acquireMachinesLock for default-k8s-diff-port-046175: {Name:mkead8b34d9557aba416ceaab7176eb30fd80326 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:33:45.645745  280462 start.go:364] duration metric: took 38.777µs to acquireMachinesLock for "default-k8s-diff-port-046175"
	I1122 00:33:45.645764  280462 start.go:96] Skipping create...Using existing machine configuration
	I1122 00:33:45.645771  280462 fix.go:54] fixHost starting: 
	I1122 00:33:45.646065  280462 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-046175 --format={{.State.Status}}
	I1122 00:33:45.665218  280462 fix.go:112] recreateIfNeeded on default-k8s-diff-port-046175: state=Stopped err=<nil>
	W1122 00:33:45.665261  280462 fix.go:138] unexpected machine state, will restart: <nil>
	
	
	==> CRI-O <==
	Nov 22 00:33:44 newest-cni-531189 crio[516]: time="2025-11-22T00:33:44.192198813Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:33:44 newest-cni-531189 crio[516]: time="2025-11-22T00:33:44.194701326Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=99234de4-7d11-480f-b8a8-67397c927cb7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 22 00:33:44 newest-cni-531189 crio[516]: time="2025-11-22T00:33:44.195305436Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=7d7dd9ef-8064-4c2c-a279-564c79b17166 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 22 00:33:44 newest-cni-531189 crio[516]: time="2025-11-22T00:33:44.196394959Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 22 00:33:44 newest-cni-531189 crio[516]: time="2025-11-22T00:33:44.196909284Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 22 00:33:44 newest-cni-531189 crio[516]: time="2025-11-22T00:33:44.197304125Z" level=info msg="Ran pod sandbox 70ea8c5dc14cfab5e48d286779f08935e976f15cbce3b93a2940d3c409539f0a with infra container: kube-system/kube-proxy-x8pr8/POD" id=99234de4-7d11-480f-b8a8-67397c927cb7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 22 00:33:44 newest-cni-531189 crio[516]: time="2025-11-22T00:33:44.197777056Z" level=info msg="Ran pod sandbox c68f20e83b5b0fe88a42fb702b5dfb2c6119b79e9e4fa88cb4f3e39549c0d34d with infra container: kube-system/kindnet-2r5vl/POD" id=7d7dd9ef-8064-4c2c-a279-564c79b17166 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 22 00:33:44 newest-cni-531189 crio[516]: time="2025-11-22T00:33:44.198300636Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=428486c6-6981-4d73-81ab-f54dd6a8560d name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:33:44 newest-cni-531189 crio[516]: time="2025-11-22T00:33:44.198793555Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=258d66c3-6c71-42a4-883f-e980dd207069 name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:33:44 newest-cni-531189 crio[516]: time="2025-11-22T00:33:44.199188798Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=16a09e26-c384-4bb7-a0e1-b345a2f0fe6c name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:33:44 newest-cni-531189 crio[516]: time="2025-11-22T00:33:44.199597008Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=f3371a60-8ae9-4929-935e-9b0f5938f52f name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:33:44 newest-cni-531189 crio[516]: time="2025-11-22T00:33:44.200165118Z" level=info msg="Creating container: kube-system/kube-proxy-x8pr8/kube-proxy" id=5872eff7-9f04-4328-96ac-4afb8e0df8df name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:33:44 newest-cni-531189 crio[516]: time="2025-11-22T00:33:44.20030364Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:33:44 newest-cni-531189 crio[516]: time="2025-11-22T00:33:44.200618132Z" level=info msg="Creating container: kube-system/kindnet-2r5vl/kindnet-cni" id=349adfa4-9f20-4d6e-b263-30697f2e8e92 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:33:44 newest-cni-531189 crio[516]: time="2025-11-22T00:33:44.200685445Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:33:44 newest-cni-531189 crio[516]: time="2025-11-22T00:33:44.206234495Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:33:44 newest-cni-531189 crio[516]: time="2025-11-22T00:33:44.206885207Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:33:44 newest-cni-531189 crio[516]: time="2025-11-22T00:33:44.208753973Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:33:44 newest-cni-531189 crio[516]: time="2025-11-22T00:33:44.209985904Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:33:44 newest-cni-531189 crio[516]: time="2025-11-22T00:33:44.237227631Z" level=info msg="Created container 76ad4c36d0cb528d5757c3c48dd71b26ede7443a405b5e3c3026dbeffc94229f: kube-system/kindnet-2r5vl/kindnet-cni" id=349adfa4-9f20-4d6e-b263-30697f2e8e92 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:33:44 newest-cni-531189 crio[516]: time="2025-11-22T00:33:44.237778576Z" level=info msg="Starting container: 76ad4c36d0cb528d5757c3c48dd71b26ede7443a405b5e3c3026dbeffc94229f" id=139a1bf8-c1a8-4b8b-aea4-e54c08805e3c name=/runtime.v1.RuntimeService/StartContainer
	Nov 22 00:33:44 newest-cni-531189 crio[516]: time="2025-11-22T00:33:44.239411149Z" level=info msg="Started container" PID=1048 containerID=76ad4c36d0cb528d5757c3c48dd71b26ede7443a405b5e3c3026dbeffc94229f description=kube-system/kindnet-2r5vl/kindnet-cni id=139a1bf8-c1a8-4b8b-aea4-e54c08805e3c name=/runtime.v1.RuntimeService/StartContainer sandboxID=c68f20e83b5b0fe88a42fb702b5dfb2c6119b79e9e4fa88cb4f3e39549c0d34d
	Nov 22 00:33:44 newest-cni-531189 crio[516]: time="2025-11-22T00:33:44.242330625Z" level=info msg="Created container edf483adf40d4c94ed15cb89c5b6defd41c014811dc698e922830824433e3234: kube-system/kube-proxy-x8pr8/kube-proxy" id=5872eff7-9f04-4328-96ac-4afb8e0df8df name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:33:44 newest-cni-531189 crio[516]: time="2025-11-22T00:33:44.242753238Z" level=info msg="Starting container: edf483adf40d4c94ed15cb89c5b6defd41c014811dc698e922830824433e3234" id=2f2cb07f-bad3-4e40-833c-a6cd13847940 name=/runtime.v1.RuntimeService/StartContainer
	Nov 22 00:33:44 newest-cni-531189 crio[516]: time="2025-11-22T00:33:44.246118038Z" level=info msg="Started container" PID=1047 containerID=edf483adf40d4c94ed15cb89c5b6defd41c014811dc698e922830824433e3234 description=kube-system/kube-proxy-x8pr8/kube-proxy id=2f2cb07f-bad3-4e40-833c-a6cd13847940 name=/runtime.v1.RuntimeService/StartContainer sandboxID=70ea8c5dc14cfab5e48d286779f08935e976f15cbce3b93a2940d3c409539f0a
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	76ad4c36d0cb5       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   4 seconds ago       Running             kindnet-cni               1                   c68f20e83b5b0       kindnet-2r5vl                               kube-system
	edf483adf40d4       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   4 seconds ago       Running             kube-proxy                1                   70ea8c5dc14cf       kube-proxy-x8pr8                            kube-system
	5ea3647cdd25e       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   6 seconds ago       Running             kube-apiserver            1                   441e8675f9675       kube-apiserver-newest-cni-531189            kube-system
	1effea11fd2cd       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   6 seconds ago       Running             etcd                      1                   6b76d0a64fcd5       etcd-newest-cni-531189                      kube-system
	fc8ea4b685078       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   6 seconds ago       Running             kube-controller-manager   1                   1eea53c866f1e       kube-controller-manager-newest-cni-531189   kube-system
	2576ff49d776c       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   6 seconds ago       Running             kube-scheduler            1                   ff012306097de       kube-scheduler-newest-cni-531189            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-531189
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-531189
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785
	                    minikube.k8s.io/name=newest-cni-531189
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_22T00_33_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 22 Nov 2025 00:33:14 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-531189
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 22 Nov 2025 00:33:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 22 Nov 2025 00:33:43 +0000   Sat, 22 Nov 2025 00:33:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 22 Nov 2025 00:33:43 +0000   Sat, 22 Nov 2025 00:33:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 22 Nov 2025 00:33:43 +0000   Sat, 22 Nov 2025 00:33:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 22 Nov 2025 00:33:43 +0000   Sat, 22 Nov 2025 00:33:13 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: plugin status uninitialized
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-531189
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 5665009e93b91d39dc05718b691e3875
	  System UUID:                2ea5badc-2e5c-4528-82d1-003ac6cb3bf5
	  Boot ID:                    8e6acb0e-95c3-406d-a4d5-d86e16610b0f
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-531189                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-2r5vl                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-newest-cni-531189             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-newest-cni-531189    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-x8pr8                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-newest-cni-531189             100m (1%)     0 (0%)      0 (0%)           0 (0%)         32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 24s                kube-proxy       
	  Normal  Starting                 3s                 kube-proxy       
	  Normal  Starting                 35s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  35s (x8 over 35s)  kubelet          Node newest-cni-531189 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    35s (x8 over 35s)  kubelet          Node newest-cni-531189 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     35s (x8 over 35s)  kubelet          Node newest-cni-531189 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    31s                kubelet          Node newest-cni-531189 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  31s                kubelet          Node newest-cni-531189 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     31s                kubelet          Node newest-cni-531189 status is now: NodeHasSufficientPID
	  Normal  Starting                 31s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           26s                node-controller  Node newest-cni-531189 event: Registered Node newest-cni-531189 in Controller
	  Normal  Starting                 8s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7s (x8 over 8s)    kubelet          Node newest-cni-531189 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7s (x8 over 8s)    kubelet          Node newest-cni-531189 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7s (x8 over 8s)    kubelet          Node newest-cni-531189 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2s                 node-controller  Node newest-cni-531189 event: Registered Node newest-cni-531189 in Controller
	
	
	==> dmesg <==
	[  +0.082960] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023673] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.276450] kauditd_printk_skb: 47 callbacks suppressed
	[Nov21 23:48] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.062274] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.023896] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.023887] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.023879] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.023873] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +2.047773] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +4.031577] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[Nov21 23:49] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[ +16.383304] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[ +32.252695] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	
	
	==> etcd [1effea11fd2cda1b8bc3e2c88c337972f678c2fb816b2a10fa07b43ee858b32e] <==
	{"level":"warn","ts":"2025-11-22T00:33:42.233646Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:42.243287Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:42.254465Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:42.260995Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:42.267626Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:42.275310Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:42.283569Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:42.291220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:42.299491Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:42.306877Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:42.313976Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:42.320120Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:42.326903Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:42.334446Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:42.340644Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:42.350670Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:42.361081Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:42.367920Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:42.373812Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:42.381240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:42.387925Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:42.404155Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:42.410696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:42.417114Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:42.466043Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50298","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 00:33:48 up  1:16,  0 user,  load average: 2.61, 2.85, 1.89
	Linux newest-cni-531189 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [76ad4c36d0cb528d5757c3c48dd71b26ede7443a405b5e3c3026dbeffc94229f] <==
	I1122 00:33:44.471742       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1122 00:33:44.472113       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1122 00:33:44.472239       1 main.go:148] setting mtu 1500 for CNI 
	I1122 00:33:44.472260       1 main.go:178] kindnetd IP family: "ipv4"
	I1122 00:33:44.472372       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-22T00:33:44Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1122 00:33:44.816028       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1122 00:33:44.816132       1 controller.go:381] "Waiting for informer caches to sync"
	I1122 00:33:44.816156       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1122 00:33:44.816347       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1122 00:33:45.116575       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1122 00:33:45.116619       1 metrics.go:72] Registering metrics
	I1122 00:33:45.116704       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [5ea3647cdd25ed9233dde0ab86c7c9b6c9d20bad6f655db096a4c60dd9ea96c4] <==
	I1122 00:33:42.974599       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1122 00:33:42.974616       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1122 00:33:42.974796       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1122 00:33:42.974629       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1122 00:33:42.974937       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1122 00:33:42.974755       1 aggregator.go:171] initial CRD sync complete...
	I1122 00:33:42.975172       1 autoregister_controller.go:144] Starting autoregister controller
	I1122 00:33:42.975198       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1122 00:33:42.975222       1 cache.go:39] Caches are synced for autoregister controller
	I1122 00:33:42.975821       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1122 00:33:42.977875       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1122 00:33:42.986928       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1122 00:33:43.006718       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1122 00:33:43.289219       1 controller.go:667] quota admission added evaluator for: namespaces
	I1122 00:33:43.314008       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1122 00:33:43.330029       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1122 00:33:43.336900       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1122 00:33:43.343139       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1122 00:33:43.373198       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.123.59"}
	I1122 00:33:43.381602       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.212.207"}
	I1122 00:33:43.873378       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1122 00:33:46.404899       1 controller.go:667] quota admission added evaluator for: endpoints
	I1122 00:33:46.706096       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1122 00:33:46.803467       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1122 00:33:46.803471       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [fc8ea4b6850789f337a85f85c11398106a27065f0faec23e9f3eb4ac62e06fa2] <==
	I1122 00:33:46.302000       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1122 00:33:46.303331       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1122 00:33:46.303416       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1122 00:33:46.303456       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1122 00:33:46.303497       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1122 00:33:46.304046       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1122 00:33:46.304077       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1122 00:33:46.306272       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1122 00:33:46.306292       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1122 00:33:46.306301       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1122 00:33:46.306438       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1122 00:33:46.309837       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1122 00:33:46.309910       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1122 00:33:46.311814       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1122 00:33:46.313769       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1122 00:33:46.314762       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1122 00:33:46.319680       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1122 00:33:46.322092       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1122 00:33:46.324370       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1122 00:33:46.330665       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1122 00:33:46.333022       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1122 00:33:46.333216       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1122 00:33:46.333306       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-531189"
	I1122 00:33:46.333351       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1122 00:33:46.353610       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [edf483adf40d4c94ed15cb89c5b6defd41c014811dc698e922830824433e3234] <==
	I1122 00:33:44.287308       1 server_linux.go:53] "Using iptables proxy"
	I1122 00:33:44.356220       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1122 00:33:44.456385       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1122 00:33:44.456415       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1122 00:33:44.456531       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1122 00:33:44.478187       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1122 00:33:44.478278       1 server_linux.go:132] "Using iptables Proxier"
	I1122 00:33:44.485145       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1122 00:33:44.485510       1 server.go:527] "Version info" version="v1.34.1"
	I1122 00:33:44.485794       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:33:44.490544       1 config.go:200] "Starting service config controller"
	I1122 00:33:44.490563       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1122 00:33:44.490584       1 config.go:106] "Starting endpoint slice config controller"
	I1122 00:33:44.490590       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1122 00:33:44.490609       1 config.go:403] "Starting serviceCIDR config controller"
	I1122 00:33:44.490614       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1122 00:33:44.490848       1 config.go:309] "Starting node config controller"
	I1122 00:33:44.490869       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1122 00:33:44.590952       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1122 00:33:44.591113       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1122 00:33:44.591146       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1122 00:33:44.591173       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [2576ff49d776c1851dc8a648545b019e99ee62e4689645fd58fcb7a560f111ae] <==
	I1122 00:33:42.189329       1 serving.go:386] Generated self-signed cert in-memory
	I1122 00:33:43.138333       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1122 00:33:43.138364       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:33:43.143558       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1122 00:33:43.143661       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1122 00:33:43.143675       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1122 00:33:43.143710       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1122 00:33:43.144178       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1122 00:33:43.144234       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1122 00:33:43.144182       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:33:43.144584       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:33:43.244755       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:33:43.244810       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1122 00:33:43.244759       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Nov 22 00:33:42 newest-cni-531189 kubelet[670]: E1122 00:33:42.967805     670 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-531189\" already exists" pod="kube-system/kube-apiserver-newest-cni-531189"
	Nov 22 00:33:42 newest-cni-531189 kubelet[670]: I1122 00:33:42.985988     670 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-531189"
	Nov 22 00:33:42 newest-cni-531189 kubelet[670]: E1122 00:33:42.996398     670 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-531189\" already exists" pod="kube-system/kube-apiserver-newest-cni-531189"
	Nov 22 00:33:42 newest-cni-531189 kubelet[670]: I1122 00:33:42.996428     670 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-531189"
	Nov 22 00:33:43 newest-cni-531189 kubelet[670]: E1122 00:33:43.002955     670 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-531189\" already exists" pod="kube-system/kube-controller-manager-newest-cni-531189"
	Nov 22 00:33:43 newest-cni-531189 kubelet[670]: I1122 00:33:43.002986     670 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-531189"
	Nov 22 00:33:43 newest-cni-531189 kubelet[670]: I1122 00:33:43.005586     670 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-531189"
	Nov 22 00:33:43 newest-cni-531189 kubelet[670]: I1122 00:33:43.005736     670 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-531189"
	Nov 22 00:33:43 newest-cni-531189 kubelet[670]: I1122 00:33:43.005777     670 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 22 00:33:43 newest-cni-531189 kubelet[670]: I1122 00:33:43.006685     670 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 22 00:33:43 newest-cni-531189 kubelet[670]: E1122 00:33:43.013103     670 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-531189\" already exists" pod="kube-system/kube-scheduler-newest-cni-531189"
	Nov 22 00:33:43 newest-cni-531189 kubelet[670]: I1122 00:33:43.013133     670 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-531189"
	Nov 22 00:33:43 newest-cni-531189 kubelet[670]: E1122 00:33:43.020662     670 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-531189\" already exists" pod="kube-system/etcd-newest-cni-531189"
	Nov 22 00:33:43 newest-cni-531189 kubelet[670]: I1122 00:33:43.883634     670 apiserver.go:52] "Watching apiserver"
	Nov 22 00:33:43 newest-cni-531189 kubelet[670]: I1122 00:33:43.920576     670 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-531189"
	Nov 22 00:33:43 newest-cni-531189 kubelet[670]: E1122 00:33:43.930041     670 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-531189\" already exists" pod="kube-system/etcd-newest-cni-531189"
	Nov 22 00:33:43 newest-cni-531189 kubelet[670]: I1122 00:33:43.985292     670 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 22 00:33:44 newest-cni-531189 kubelet[670]: I1122 00:33:44.013245     670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5b238c04-98fa-46db-91e7-73a2ff0cb690-xtables-lock\") pod \"kube-proxy-x8pr8\" (UID: \"5b238c04-98fa-46db-91e7-73a2ff0cb690\") " pod="kube-system/kube-proxy-x8pr8"
	Nov 22 00:33:44 newest-cni-531189 kubelet[670]: I1122 00:33:44.013304     670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e3ab47c0-fc8c-4b02-8905-b3975fc5fe58-xtables-lock\") pod \"kindnet-2r5vl\" (UID: \"e3ab47c0-fc8c-4b02-8905-b3975fc5fe58\") " pod="kube-system/kindnet-2r5vl"
	Nov 22 00:33:44 newest-cni-531189 kubelet[670]: I1122 00:33:44.013354     670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e3ab47c0-fc8c-4b02-8905-b3975fc5fe58-lib-modules\") pod \"kindnet-2r5vl\" (UID: \"e3ab47c0-fc8c-4b02-8905-b3975fc5fe58\") " pod="kube-system/kindnet-2r5vl"
	Nov 22 00:33:44 newest-cni-531189 kubelet[670]: I1122 00:33:44.013376     670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5b238c04-98fa-46db-91e7-73a2ff0cb690-lib-modules\") pod \"kube-proxy-x8pr8\" (UID: \"5b238c04-98fa-46db-91e7-73a2ff0cb690\") " pod="kube-system/kube-proxy-x8pr8"
	Nov 22 00:33:44 newest-cni-531189 kubelet[670]: I1122 00:33:44.013412     670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/e3ab47c0-fc8c-4b02-8905-b3975fc5fe58-cni-cfg\") pod \"kindnet-2r5vl\" (UID: \"e3ab47c0-fc8c-4b02-8905-b3975fc5fe58\") " pod="kube-system/kindnet-2r5vl"
	Nov 22 00:33:45 newest-cni-531189 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 22 00:33:45 newest-cni-531189 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 22 00:33:45 newest-cni-531189 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-531189 -n newest-cni-531189
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-531189 -n newest-cni-531189: exit status 2 (342.31564ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-531189 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-bc2kh storage-provisioner dashboard-metrics-scraper-6ffb444bf9-zxbns kubernetes-dashboard-855c9754f9-q9k8g
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-531189 describe pod coredns-66bc5c9577-bc2kh storage-provisioner dashboard-metrics-scraper-6ffb444bf9-zxbns kubernetes-dashboard-855c9754f9-q9k8g
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-531189 describe pod coredns-66bc5c9577-bc2kh storage-provisioner dashboard-metrics-scraper-6ffb444bf9-zxbns kubernetes-dashboard-855c9754f9-q9k8g: exit status 1 (60.086912ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-bc2kh" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-zxbns" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-q9k8g" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-531189 describe pod coredns-66bc5c9577-bc2kh storage-provisioner dashboard-metrics-scraper-6ffb444bf9-zxbns kubernetes-dashboard-855c9754f9-q9k8g: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-531189
helpers_test.go:243: (dbg) docker inspect newest-cni-531189:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "65c93ca663785ca6d510675e02ddb8c57cbafd33c1f68bf07a5dd2a4c309fb8b",
	        "Created": "2025-11-22T00:33:00.30734986Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 277530,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-22T00:33:34.537070411Z",
	            "FinishedAt": "2025-11-22T00:33:33.524111057Z"
	        },
	        "Image": "sha256:e5906b22e872a17998ae88aee6d850484e7a99144e0db6afcf2c44a53e6042d4",
	        "ResolvConfPath": "/var/lib/docker/containers/65c93ca663785ca6d510675e02ddb8c57cbafd33c1f68bf07a5dd2a4c309fb8b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/65c93ca663785ca6d510675e02ddb8c57cbafd33c1f68bf07a5dd2a4c309fb8b/hostname",
	        "HostsPath": "/var/lib/docker/containers/65c93ca663785ca6d510675e02ddb8c57cbafd33c1f68bf07a5dd2a4c309fb8b/hosts",
	        "LogPath": "/var/lib/docker/containers/65c93ca663785ca6d510675e02ddb8c57cbafd33c1f68bf07a5dd2a4c309fb8b/65c93ca663785ca6d510675e02ddb8c57cbafd33c1f68bf07a5dd2a4c309fb8b-json.log",
	        "Name": "/newest-cni-531189",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-531189:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-531189",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "65c93ca663785ca6d510675e02ddb8c57cbafd33c1f68bf07a5dd2a4c309fb8b",
	                "LowerDir": "/var/lib/docker/overlay2/bcd7c4ef17fddfcbaac27bd8b3fcac3c9932d29e7097cf3cad054082bb292d8f-init/diff:/var/lib/docker/overlay2/ba9391341a6f38c98c330a240b44a37e901d779a7c95e15c141c59c46e09c348/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bcd7c4ef17fddfcbaac27bd8b3fcac3c9932d29e7097cf3cad054082bb292d8f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bcd7c4ef17fddfcbaac27bd8b3fcac3c9932d29e7097cf3cad054082bb292d8f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bcd7c4ef17fddfcbaac27bd8b3fcac3c9932d29e7097cf3cad054082bb292d8f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-531189",
	                "Source": "/var/lib/docker/volumes/newest-cni-531189/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-531189",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-531189",
	                "name.minikube.sigs.k8s.io": "newest-cni-531189",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "d7d07feab84663aa811c503b1cbd466a59e9e10ec57ecb7fd201fa9d8c8de344",
	            "SandboxKey": "/var/run/docker/netns/d7d07feab846",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-531189": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b8187a3a6ebbb0612319b5aa920a8b27ea6d7a8c6a1abed3774766a0afd701a8",
	                    "EndpointID": "2878be71a4175ed09eff22ce43be42f7c76ceb44521488e5ec1f8ab6a9031a84",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "02:79:d9:20:99:58",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-531189",
	                        "65c93ca66378"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-531189 -n newest-cni-531189
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-531189 -n newest-cni-531189: exit status 2 (315.125906ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-531189 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-531189 logs -n 25: (1.041423909s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ old-k8s-version-377321 image list --format=json                                                                                                                                                                                               │ old-k8s-version-377321       │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │ 22 Nov 25 00:32 UTC │
	│ pause   │ -p old-k8s-version-377321 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-377321       │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │                     │
	│ delete  │ -p old-k8s-version-377321                                                                                                                                                                                                                     │ old-k8s-version-377321       │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │ 22 Nov 25 00:32 UTC │
	│ delete  │ -p old-k8s-version-377321                                                                                                                                                                                                                     │ old-k8s-version-377321       │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │ 22 Nov 25 00:32 UTC │
	│ delete  │ -p disable-driver-mounts-751225                                                                                                                                                                                                               │ disable-driver-mounts-751225 │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │ 22 Nov 25 00:32 UTC │
	│ start   │ -p default-k8s-diff-port-046175 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-046175 │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │ 22 Nov 25 00:33 UTC │
	│ image   │ no-preload-983546 image list --format=json                                                                                                                                                                                                    │ no-preload-983546            │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │ 22 Nov 25 00:32 UTC │
	│ pause   │ -p no-preload-983546 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-983546            │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │                     │
	│ delete  │ -p no-preload-983546                                                                                                                                                                                                                          │ no-preload-983546            │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │ 22 Nov 25 00:32 UTC │
	│ delete  │ -p no-preload-983546                                                                                                                                                                                                                          │ no-preload-983546            │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │ 22 Nov 25 00:32 UTC │
	│ start   │ -p newest-cni-531189 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-531189            │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │ 22 Nov 25 00:33 UTC │
	│ addons  │ enable metrics-server -p embed-certs-084979 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-084979           │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │                     │
	│ stop    │ -p embed-certs-084979 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-084979           │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │ 22 Nov 25 00:33 UTC │
	│ addons  │ enable dashboard -p embed-certs-084979 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-084979           │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │ 22 Nov 25 00:33 UTC │
	│ start   │ -p embed-certs-084979 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-084979           │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-531189 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-531189            │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │                     │
	│ stop    │ -p newest-cni-531189 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-531189            │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │ 22 Nov 25 00:33 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-046175 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-046175 │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-046175 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-046175 │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │ 22 Nov 25 00:33 UTC │
	│ addons  │ enable dashboard -p newest-cni-531189 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-531189            │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │ 22 Nov 25 00:33 UTC │
	│ start   │ -p newest-cni-531189 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-531189            │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │ 22 Nov 25 00:33 UTC │
	│ image   │ newest-cni-531189 image list --format=json                                                                                                                                                                                                    │ newest-cni-531189            │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │ 22 Nov 25 00:33 UTC │
	│ pause   │ -p newest-cni-531189 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-531189            │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-046175 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-046175 │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │ 22 Nov 25 00:33 UTC │
	│ start   │ -p default-k8s-diff-port-046175 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-046175 │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/22 00:33:45
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1122 00:33:45.421187  280462 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:33:45.421310  280462 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:33:45.421317  280462 out.go:374] Setting ErrFile to fd 2...
	I1122 00:33:45.421324  280462 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:33:45.421645  280462 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9122/.minikube/bin
	I1122 00:33:45.422250  280462 out.go:368] Setting JSON to false
	I1122 00:33:45.423777  280462 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4574,"bootTime":1763767051,"procs":337,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1122 00:33:45.423854  280462 start.go:143] virtualization: kvm guest
	I1122 00:33:45.425693  280462 out.go:179] * [default-k8s-diff-port-046175] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1122 00:33:45.426785  280462 notify.go:221] Checking for updates...
	I1122 00:33:45.426865  280462 out.go:179]   - MINIKUBE_LOCATION=21934
	I1122 00:33:45.428105  280462 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 00:33:45.429792  280462 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-9122/kubeconfig
	I1122 00:33:45.430894  280462 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-9122/.minikube
	I1122 00:33:45.434297  280462 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1122 00:33:45.435552  280462 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 00:33:45.437246  280462 config.go:182] Loaded profile config "default-k8s-diff-port-046175": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:33:45.438085  280462 driver.go:422] Setting default libvirt URI to qemu:///system
	I1122 00:33:45.474515  280462 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1122 00:33:45.474618  280462 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:33:45.543961  280462 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-22 00:33:45.532674231 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1122 00:33:45.544120  280462 docker.go:319] overlay module found
	I1122 00:33:45.545587  280462 out.go:179] * Using the docker driver based on existing profile
	I1122 00:33:45.546567  280462 start.go:309] selected driver: docker
	I1122 00:33:45.546585  280462 start.go:930] validating driver "docker" against &{Name:default-k8s-diff-port-046175 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-046175 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:33:45.546691  280462 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 00:33:45.547441  280462 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:33:45.615649  280462 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-22 00:33:45.605134567 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1122 00:33:45.616021  280462 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:33:45.616077  280462 cni.go:84] Creating CNI manager for ""
	I1122 00:33:45.616151  280462 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1122 00:33:45.616236  280462 start.go:353] cluster config:
	{Name:default-k8s-diff-port-046175 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-046175 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:33:45.618824  280462 out.go:179] * Starting "default-k8s-diff-port-046175" primary control-plane node in "default-k8s-diff-port-046175" cluster
	I1122 00:33:45.619978  280462 cache.go:134] Beginning downloading kic base image for docker with crio
	I1122 00:33:45.621216  280462 out.go:179] * Pulling base image v0.0.48-1763588073-21934 ...
	I1122 00:33:45.622257  280462 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:33:45.622296  280462 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21934-9122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1122 00:33:45.622312  280462 cache.go:65] Caching tarball of preloaded images
	I1122 00:33:45.622391  280462 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon
	I1122 00:33:45.622424  280462 preload.go:238] Found /home/jenkins/minikube-integration/21934-9122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1122 00:33:45.622442  280462 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1122 00:33:45.622571  280462 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/default-k8s-diff-port-046175/config.json ...
	I1122 00:33:45.645602  280462 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon, skipping pull
	I1122 00:33:45.645631  280462 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e exists in daemon, skipping load
	I1122 00:33:45.645654  280462 cache.go:243] Successfully downloaded all kic artifacts
	I1122 00:33:45.645685  280462 start.go:360] acquireMachinesLock for default-k8s-diff-port-046175: {Name:mkead8b34d9557aba416ceaab7176eb30fd80326 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:33:45.645745  280462 start.go:364] duration metric: took 38.777µs to acquireMachinesLock for "default-k8s-diff-port-046175"
	I1122 00:33:45.645764  280462 start.go:96] Skipping create...Using existing machine configuration
	I1122 00:33:45.645771  280462 fix.go:54] fixHost starting: 
	I1122 00:33:45.646065  280462 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-046175 --format={{.State.Status}}
	I1122 00:33:45.665218  280462 fix.go:112] recreateIfNeeded on default-k8s-diff-port-046175: state=Stopped err=<nil>
	W1122 00:33:45.665261  280462 fix.go:138] unexpected machine state, will restart: <nil>
	I1122 00:33:45.650721  218533 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.579088732s
	I1122 00:33:46.532263  218533 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.460631451s
	I1122 00:33:48.073122  218533 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001484371s
	I1122 00:33:48.085959  218533 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1122 00:33:48.095355  218533 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1122 00:33:48.104637  218533 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1122 00:33:48.104905  218533 kubeadm.go:319] [mark-control-plane] Marking the node kubernetes-upgrade-619859 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1122 00:33:48.112468  218533 kubeadm.go:319] [bootstrap-token] Using token: 06psfk.5ow9n1ple11k5104
	
	
	==> CRI-O <==
	Nov 22 00:33:44 newest-cni-531189 crio[516]: time="2025-11-22T00:33:44.192198813Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:33:44 newest-cni-531189 crio[516]: time="2025-11-22T00:33:44.194701326Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=99234de4-7d11-480f-b8a8-67397c927cb7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 22 00:33:44 newest-cni-531189 crio[516]: time="2025-11-22T00:33:44.195305436Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=7d7dd9ef-8064-4c2c-a279-564c79b17166 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 22 00:33:44 newest-cni-531189 crio[516]: time="2025-11-22T00:33:44.196394959Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 22 00:33:44 newest-cni-531189 crio[516]: time="2025-11-22T00:33:44.196909284Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 22 00:33:44 newest-cni-531189 crio[516]: time="2025-11-22T00:33:44.197304125Z" level=info msg="Ran pod sandbox 70ea8c5dc14cfab5e48d286779f08935e976f15cbce3b93a2940d3c409539f0a with infra container: kube-system/kube-proxy-x8pr8/POD" id=99234de4-7d11-480f-b8a8-67397c927cb7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 22 00:33:44 newest-cni-531189 crio[516]: time="2025-11-22T00:33:44.197777056Z" level=info msg="Ran pod sandbox c68f20e83b5b0fe88a42fb702b5dfb2c6119b79e9e4fa88cb4f3e39549c0d34d with infra container: kube-system/kindnet-2r5vl/POD" id=7d7dd9ef-8064-4c2c-a279-564c79b17166 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 22 00:33:44 newest-cni-531189 crio[516]: time="2025-11-22T00:33:44.198300636Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=428486c6-6981-4d73-81ab-f54dd6a8560d name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:33:44 newest-cni-531189 crio[516]: time="2025-11-22T00:33:44.198793555Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=258d66c3-6c71-42a4-883f-e980dd207069 name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:33:44 newest-cni-531189 crio[516]: time="2025-11-22T00:33:44.199188798Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=16a09e26-c384-4bb7-a0e1-b345a2f0fe6c name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:33:44 newest-cni-531189 crio[516]: time="2025-11-22T00:33:44.199597008Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=f3371a60-8ae9-4929-935e-9b0f5938f52f name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:33:44 newest-cni-531189 crio[516]: time="2025-11-22T00:33:44.200165118Z" level=info msg="Creating container: kube-system/kube-proxy-x8pr8/kube-proxy" id=5872eff7-9f04-4328-96ac-4afb8e0df8df name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:33:44 newest-cni-531189 crio[516]: time="2025-11-22T00:33:44.20030364Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:33:44 newest-cni-531189 crio[516]: time="2025-11-22T00:33:44.200618132Z" level=info msg="Creating container: kube-system/kindnet-2r5vl/kindnet-cni" id=349adfa4-9f20-4d6e-b263-30697f2e8e92 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:33:44 newest-cni-531189 crio[516]: time="2025-11-22T00:33:44.200685445Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:33:44 newest-cni-531189 crio[516]: time="2025-11-22T00:33:44.206234495Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:33:44 newest-cni-531189 crio[516]: time="2025-11-22T00:33:44.206885207Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:33:44 newest-cni-531189 crio[516]: time="2025-11-22T00:33:44.208753973Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:33:44 newest-cni-531189 crio[516]: time="2025-11-22T00:33:44.209985904Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:33:44 newest-cni-531189 crio[516]: time="2025-11-22T00:33:44.237227631Z" level=info msg="Created container 76ad4c36d0cb528d5757c3c48dd71b26ede7443a405b5e3c3026dbeffc94229f: kube-system/kindnet-2r5vl/kindnet-cni" id=349adfa4-9f20-4d6e-b263-30697f2e8e92 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:33:44 newest-cni-531189 crio[516]: time="2025-11-22T00:33:44.237778576Z" level=info msg="Starting container: 76ad4c36d0cb528d5757c3c48dd71b26ede7443a405b5e3c3026dbeffc94229f" id=139a1bf8-c1a8-4b8b-aea4-e54c08805e3c name=/runtime.v1.RuntimeService/StartContainer
	Nov 22 00:33:44 newest-cni-531189 crio[516]: time="2025-11-22T00:33:44.239411149Z" level=info msg="Started container" PID=1048 containerID=76ad4c36d0cb528d5757c3c48dd71b26ede7443a405b5e3c3026dbeffc94229f description=kube-system/kindnet-2r5vl/kindnet-cni id=139a1bf8-c1a8-4b8b-aea4-e54c08805e3c name=/runtime.v1.RuntimeService/StartContainer sandboxID=c68f20e83b5b0fe88a42fb702b5dfb2c6119b79e9e4fa88cb4f3e39549c0d34d
	Nov 22 00:33:44 newest-cni-531189 crio[516]: time="2025-11-22T00:33:44.242330625Z" level=info msg="Created container edf483adf40d4c94ed15cb89c5b6defd41c014811dc698e922830824433e3234: kube-system/kube-proxy-x8pr8/kube-proxy" id=5872eff7-9f04-4328-96ac-4afb8e0df8df name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:33:44 newest-cni-531189 crio[516]: time="2025-11-22T00:33:44.242753238Z" level=info msg="Starting container: edf483adf40d4c94ed15cb89c5b6defd41c014811dc698e922830824433e3234" id=2f2cb07f-bad3-4e40-833c-a6cd13847940 name=/runtime.v1.RuntimeService/StartContainer
	Nov 22 00:33:44 newest-cni-531189 crio[516]: time="2025-11-22T00:33:44.246118038Z" level=info msg="Started container" PID=1047 containerID=edf483adf40d4c94ed15cb89c5b6defd41c014811dc698e922830824433e3234 description=kube-system/kube-proxy-x8pr8/kube-proxy id=2f2cb07f-bad3-4e40-833c-a6cd13847940 name=/runtime.v1.RuntimeService/StartContainer sandboxID=70ea8c5dc14cfab5e48d286779f08935e976f15cbce3b93a2940d3c409539f0a
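[Editor's note] The CRI-O entries above trace the standard CRI lifecycle for a pod: RunPodSandbox creates the infra container, then CreateContainer and StartContainer run each workload container, all as gRPC calls on /runtime.v1.RuntimeService. A sketch of querying that same gRPC surface with the upstream CRI client, assuming CRI-O's default socket path:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// CRI-O's default socket; adjust if the runtime listens elsewhere.
	conn, err := grpc.NewClient("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Same RPC family as the /runtime.v1.RuntimeService entries above.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range resp.Containers {
		name := ""
		if c.Metadata != nil {
			name = c.Metadata.Name
		}
		fmt.Printf("%s  %v  %s\n", c.Id, c.State, name)
	}
}
```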
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	76ad4c36d0cb5       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   5 seconds ago       Running             kindnet-cni               1                   c68f20e83b5b0       kindnet-2r5vl                               kube-system
	edf483adf40d4       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   5 seconds ago       Running             kube-proxy                1                   70ea8c5dc14cf       kube-proxy-x8pr8                            kube-system
	5ea3647cdd25e       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   8 seconds ago       Running             kube-apiserver            1                   441e8675f9675       kube-apiserver-newest-cni-531189            kube-system
	1effea11fd2cd       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   8 seconds ago       Running             etcd                      1                   6b76d0a64fcd5       etcd-newest-cni-531189                      kube-system
	fc8ea4b685078       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   8 seconds ago       Running             kube-controller-manager   1                   1eea53c866f1e       kube-controller-manager-newest-cni-531189   kube-system
	2576ff49d776c       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   8 seconds ago       Running             kube-scheduler            1                   ff012306097de       kube-scheduler-newest-cni-531189            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-531189
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-531189
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785
	                    minikube.k8s.io/name=newest-cni-531189
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_22T00_33_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 22 Nov 2025 00:33:14 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-531189
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 22 Nov 2025 00:33:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 22 Nov 2025 00:33:43 +0000   Sat, 22 Nov 2025 00:33:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 22 Nov 2025 00:33:43 +0000   Sat, 22 Nov 2025 00:33:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 22 Nov 2025 00:33:43 +0000   Sat, 22 Nov 2025 00:33:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 22 Nov 2025 00:33:43 +0000   Sat, 22 Nov 2025 00:33:13 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: plugin status uninitialized
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-531189
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 5665009e93b91d39dc05718b691e3875
	  System UUID:                2ea5badc-2e5c-4528-82d1-003ac6cb3bf5
	  Boot ID:                    8e6acb0e-95c3-406d-a4d5-d86e16610b0f
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-531189                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         33s
	  kube-system                 kindnet-2r5vl                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-newest-cni-531189             250m (3%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-newest-cni-531189    200m (2%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-x8pr8                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-newest-cni-531189             100m (1%)     0 (0%)      0 (0%)           0 (0%)         34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 26s                kube-proxy       
	  Normal  Starting                 5s                 kube-proxy       
	  Normal  Starting                 37s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  37s (x8 over 37s)  kubelet          Node newest-cni-531189 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    37s (x8 over 37s)  kubelet          Node newest-cni-531189 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     37s (x8 over 37s)  kubelet          Node newest-cni-531189 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    33s                kubelet          Node newest-cni-531189 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  33s                kubelet          Node newest-cni-531189 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     33s                kubelet          Node newest-cni-531189 status is now: NodeHasSufficientPID
	  Normal  Starting                 33s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           28s                node-controller  Node newest-cni-531189 event: Registered Node newest-cni-531189 in Controller
	  Normal  Starting                 10s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9s (x8 over 10s)   kubelet          Node newest-cni-531189 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9s (x8 over 10s)   kubelet          Node newest-cni-531189 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9s (x8 over 10s)   kubelet          Node newest-cni-531189 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4s                 node-controller  Node newest-cni-531189 event: Registered Node newest-cni-531189 in Controller
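[Editor's note] The node above reports Ready=False and carries the node.kubernetes.io/not-ready taint because the container runtime's network was not yet initialized (NetworkReady=false). A client-go sketch that reads the same Ready condition programmatically, assuming a kubeconfig at the default location and the node name from this log:

```go
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "newest-cni-531189", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			// Mirrors the Ready row in the conditions table above.
			fmt.Printf("Ready=%s reason=%s: %s\n", c.Status, c.Reason, c.Message)
		}
	}
}
```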
	
	
	==> dmesg <==
	[  +0.082960] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023673] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.276450] kauditd_printk_skb: 47 callbacks suppressed
	[Nov21 23:48] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.062274] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.023896] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.023887] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.023879] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.023873] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +2.047773] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +4.031577] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[Nov21 23:49] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[ +16.383304] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[ +32.252695] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	
	
	==> etcd [1effea11fd2cda1b8bc3e2c88c337972f678c2fb816b2a10fa07b43ee858b32e] <==
	{"level":"warn","ts":"2025-11-22T00:33:42.233646Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:42.243287Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:42.254465Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:42.260995Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:42.267626Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:42.275310Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:42.283569Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:42.291220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:42.299491Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:42.306877Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:42.313976Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:42.320120Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:42.326903Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:42.334446Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:42.340644Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:42.350670Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:42.361081Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:42.367920Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:42.373812Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:42.381240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:42.387925Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:42.404155Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:42.410696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:42.417114Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:42.466043Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50298","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 00:33:50 up  1:16,  0 user,  load average: 2.61, 2.85, 1.89
	Linux newest-cni-531189 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [76ad4c36d0cb528d5757c3c48dd71b26ede7443a405b5e3c3026dbeffc94229f] <==
	I1122 00:33:44.471742       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1122 00:33:44.472113       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1122 00:33:44.472239       1 main.go:148] setting mtu 1500 for CNI 
	I1122 00:33:44.472260       1 main.go:178] kindnetd IP family: "ipv4"
	I1122 00:33:44.472372       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-22T00:33:44Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1122 00:33:44.816028       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1122 00:33:44.816132       1 controller.go:381] "Waiting for informer caches to sync"
	I1122 00:33:44.816156       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1122 00:33:44.816347       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1122 00:33:45.116575       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1122 00:33:45.116619       1 metrics.go:72] Registering metrics
	I1122 00:33:45.116704       1 controller.go:711] "Syncing nftables rules"
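[Editor's note] kindnet here, and kube-proxy and kube-scheduler below, all log the same "Waiting for caches to sync" / "Caches are synced" pair. That is the shared-informer startup pattern from client-go: start the informers, then block until their local caches have caught up with the API server before doing any work. A minimal sketch of the pattern, assuming a kubeconfig at the default location:

```go
package main

import (
	"context"
	"log"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	factory := informers.NewSharedInformerFactory(cs, 10*time.Minute)
	pods := factory.Core().V1().Pods().Informer()

	factory.Start(ctx.Done())
	log.Println("Waiting for caches to sync")
	if !cache.WaitForCacheSync(ctx.Done(), pods.HasSynced) {
		log.Fatal("caches did not sync before timeout")
	}
	log.Println("Caches are synced") // safe to read the local cache now
}
```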
	
	
	==> kube-apiserver [5ea3647cdd25ed9233dde0ab86c7c9b6c9d20bad6f655db096a4c60dd9ea96c4] <==
	I1122 00:33:42.974599       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1122 00:33:42.974616       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1122 00:33:42.974796       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1122 00:33:42.974629       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1122 00:33:42.974937       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1122 00:33:42.974755       1 aggregator.go:171] initial CRD sync complete...
	I1122 00:33:42.975172       1 autoregister_controller.go:144] Starting autoregister controller
	I1122 00:33:42.975198       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1122 00:33:42.975222       1 cache.go:39] Caches are synced for autoregister controller
	I1122 00:33:42.975821       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1122 00:33:42.977875       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1122 00:33:42.986928       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1122 00:33:43.006718       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1122 00:33:43.289219       1 controller.go:667] quota admission added evaluator for: namespaces
	I1122 00:33:43.314008       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1122 00:33:43.330029       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1122 00:33:43.336900       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1122 00:33:43.343139       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1122 00:33:43.373198       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.123.59"}
	I1122 00:33:43.381602       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.212.207"}
	I1122 00:33:43.873378       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1122 00:33:46.404899       1 controller.go:667] quota admission added evaluator for: endpoints
	I1122 00:33:46.706096       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1122 00:33:46.803467       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1122 00:33:46.803471       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [fc8ea4b6850789f337a85f85c11398106a27065f0faec23e9f3eb4ac62e06fa2] <==
	I1122 00:33:46.302000       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1122 00:33:46.303331       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1122 00:33:46.303416       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1122 00:33:46.303456       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1122 00:33:46.303497       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1122 00:33:46.304046       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1122 00:33:46.304077       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1122 00:33:46.306272       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1122 00:33:46.306292       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1122 00:33:46.306301       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1122 00:33:46.306438       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1122 00:33:46.309837       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1122 00:33:46.309910       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1122 00:33:46.311814       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1122 00:33:46.313769       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1122 00:33:46.314762       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1122 00:33:46.319680       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1122 00:33:46.322092       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1122 00:33:46.324370       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1122 00:33:46.330665       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1122 00:33:46.333022       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1122 00:33:46.333216       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1122 00:33:46.333306       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-531189"
	I1122 00:33:46.333351       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1122 00:33:46.353610       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [edf483adf40d4c94ed15cb89c5b6defd41c014811dc698e922830824433e3234] <==
	I1122 00:33:44.287308       1 server_linux.go:53] "Using iptables proxy"
	I1122 00:33:44.356220       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1122 00:33:44.456385       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1122 00:33:44.456415       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1122 00:33:44.456531       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1122 00:33:44.478187       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1122 00:33:44.478278       1 server_linux.go:132] "Using iptables Proxier"
	I1122 00:33:44.485145       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1122 00:33:44.485510       1 server.go:527] "Version info" version="v1.34.1"
	I1122 00:33:44.485794       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:33:44.490544       1 config.go:200] "Starting service config controller"
	I1122 00:33:44.490563       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1122 00:33:44.490584       1 config.go:106] "Starting endpoint slice config controller"
	I1122 00:33:44.490590       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1122 00:33:44.490609       1 config.go:403] "Starting serviceCIDR config controller"
	I1122 00:33:44.490614       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1122 00:33:44.490848       1 config.go:309] "Starting node config controller"
	I1122 00:33:44.490869       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1122 00:33:44.590952       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1122 00:33:44.591113       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1122 00:33:44.591146       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1122 00:33:44.591173       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [2576ff49d776c1851dc8a648545b019e99ee62e4689645fd58fcb7a560f111ae] <==
	I1122 00:33:42.189329       1 serving.go:386] Generated self-signed cert in-memory
	I1122 00:33:43.138333       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1122 00:33:43.138364       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:33:43.143558       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1122 00:33:43.143661       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1122 00:33:43.143675       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1122 00:33:43.143710       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1122 00:33:43.144178       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1122 00:33:43.144234       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1122 00:33:43.144182       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:33:43.144584       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:33:43.244755       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:33:43.244810       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1122 00:33:43.244759       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
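[Editor's note] The scheduler's "Generated self-signed cert in-memory" line reflects a common Go pattern: mint a throwaway serving certificate at startup instead of reading one from disk. A self-contained sketch of generating such a certificate with the standard library (subject and lifetime here are illustrative):

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "localhost"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour), // short-lived on purpose
		KeyUsage:     x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Self-signed: the template acts as both subject and issuer.
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```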
	
	
	==> kubelet <==
	Nov 22 00:33:42 newest-cni-531189 kubelet[670]: E1122 00:33:42.967805     670 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-531189\" already exists" pod="kube-system/kube-apiserver-newest-cni-531189"
	Nov 22 00:33:42 newest-cni-531189 kubelet[670]: I1122 00:33:42.985988     670 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-531189"
	Nov 22 00:33:42 newest-cni-531189 kubelet[670]: E1122 00:33:42.996398     670 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-531189\" already exists" pod="kube-system/kube-apiserver-newest-cni-531189"
	Nov 22 00:33:42 newest-cni-531189 kubelet[670]: I1122 00:33:42.996428     670 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-531189"
	Nov 22 00:33:43 newest-cni-531189 kubelet[670]: E1122 00:33:43.002955     670 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-531189\" already exists" pod="kube-system/kube-controller-manager-newest-cni-531189"
	Nov 22 00:33:43 newest-cni-531189 kubelet[670]: I1122 00:33:43.002986     670 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-531189"
	Nov 22 00:33:43 newest-cni-531189 kubelet[670]: I1122 00:33:43.005586     670 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-531189"
	Nov 22 00:33:43 newest-cni-531189 kubelet[670]: I1122 00:33:43.005736     670 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-531189"
	Nov 22 00:33:43 newest-cni-531189 kubelet[670]: I1122 00:33:43.005777     670 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 22 00:33:43 newest-cni-531189 kubelet[670]: I1122 00:33:43.006685     670 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 22 00:33:43 newest-cni-531189 kubelet[670]: E1122 00:33:43.013103     670 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-531189\" already exists" pod="kube-system/kube-scheduler-newest-cni-531189"
	Nov 22 00:33:43 newest-cni-531189 kubelet[670]: I1122 00:33:43.013133     670 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-531189"
	Nov 22 00:33:43 newest-cni-531189 kubelet[670]: E1122 00:33:43.020662     670 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-531189\" already exists" pod="kube-system/etcd-newest-cni-531189"
	Nov 22 00:33:43 newest-cni-531189 kubelet[670]: I1122 00:33:43.883634     670 apiserver.go:52] "Watching apiserver"
	Nov 22 00:33:43 newest-cni-531189 kubelet[670]: I1122 00:33:43.920576     670 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-531189"
	Nov 22 00:33:43 newest-cni-531189 kubelet[670]: E1122 00:33:43.930041     670 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-531189\" already exists" pod="kube-system/etcd-newest-cni-531189"
	Nov 22 00:33:43 newest-cni-531189 kubelet[670]: I1122 00:33:43.985292     670 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 22 00:33:44 newest-cni-531189 kubelet[670]: I1122 00:33:44.013245     670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5b238c04-98fa-46db-91e7-73a2ff0cb690-xtables-lock\") pod \"kube-proxy-x8pr8\" (UID: \"5b238c04-98fa-46db-91e7-73a2ff0cb690\") " pod="kube-system/kube-proxy-x8pr8"
	Nov 22 00:33:44 newest-cni-531189 kubelet[670]: I1122 00:33:44.013304     670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e3ab47c0-fc8c-4b02-8905-b3975fc5fe58-xtables-lock\") pod \"kindnet-2r5vl\" (UID: \"e3ab47c0-fc8c-4b02-8905-b3975fc5fe58\") " pod="kube-system/kindnet-2r5vl"
	Nov 22 00:33:44 newest-cni-531189 kubelet[670]: I1122 00:33:44.013354     670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e3ab47c0-fc8c-4b02-8905-b3975fc5fe58-lib-modules\") pod \"kindnet-2r5vl\" (UID: \"e3ab47c0-fc8c-4b02-8905-b3975fc5fe58\") " pod="kube-system/kindnet-2r5vl"
	Nov 22 00:33:44 newest-cni-531189 kubelet[670]: I1122 00:33:44.013376     670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5b238c04-98fa-46db-91e7-73a2ff0cb690-lib-modules\") pod \"kube-proxy-x8pr8\" (UID: \"5b238c04-98fa-46db-91e7-73a2ff0cb690\") " pod="kube-system/kube-proxy-x8pr8"
	Nov 22 00:33:44 newest-cni-531189 kubelet[670]: I1122 00:33:44.013412     670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/e3ab47c0-fc8c-4b02-8905-b3975fc5fe58-cni-cfg\") pod \"kindnet-2r5vl\" (UID: \"e3ab47c0-fc8c-4b02-8905-b3975fc5fe58\") " pod="kube-system/kindnet-2r5vl"
	Nov 22 00:33:45 newest-cni-531189 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 22 00:33:45 newest-cni-531189 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 22 00:33:45 newest-cni-531189 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-531189 -n newest-cni-531189
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-531189 -n newest-cni-531189: exit status 2 (366.914121ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
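[Editor's note] The --format={{.APIServer}} flag used above is a Go text/template evaluated against minikube's status structure, which is why the command prints just "Running". A tiny sketch of the mechanism (the Status struct here is a stand-in, not minikube's actual type):

```go
package main

import (
	"os"
	"text/template"
)

// Stand-in for the struct minikube renders its status from; the real
// type has more fields, but the template mechanism is the same.
type Status struct{ Host, Kubelet, APIServer string }

func main() {
	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
	tmpl.Execute(os.Stdout, Status{Host: "Running", Kubelet: "Stopped", APIServer: "Running"})
}
```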
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-531189 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-bc2kh storage-provisioner dashboard-metrics-scraper-6ffb444bf9-zxbns kubernetes-dashboard-855c9754f9-q9k8g
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-531189 describe pod coredns-66bc5c9577-bc2kh storage-provisioner dashboard-metrics-scraper-6ffb444bf9-zxbns kubernetes-dashboard-855c9754f9-q9k8g
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-531189 describe pod coredns-66bc5c9577-bc2kh storage-provisioner dashboard-metrics-scraper-6ffb444bf9-zxbns kubernetes-dashboard-855c9754f9-q9k8g: exit status 1 (76.676379ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-bc2kh" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-zxbns" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-q9k8g" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-531189 describe pod coredns-66bc5c9577-bc2kh storage-provisioner dashboard-metrics-scraper-6ffb444bf9-zxbns kubernetes-dashboard-855c9754f9-q9k8g: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (6.06s)
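[Editor's note] The post-mortem above first finds the non-running pods with a field selector, then fails to describe them with NotFound errors, likely because the describe call omits a namespace (so it looks in default) while those pods live in kube-system and kubernetes-dashboard. The same query can be scripted directly; a sketch that mirrors the harness's invocation via os/exec, with the context name taken from this log:

```go
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Mirrors the harness call above: list pods in all namespaces
	// whose phase is anything other than Running.
	out, err := exec.Command("kubectl", "--context", "newest-cni-531189",
		"get", "po", "-A",
		"--field-selector=status.phase!=Running",
		"-o", "jsonpath={.items[*].metadata.name}").CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl failed: %v\n%s", err, out)
	}
	fmt.Printf("non-running pods: %s\n", out)
}
```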

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (6.41s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-084979 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-084979 --alsologtostderr -v=1: exit status 80 (2.228471289s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-084979 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1122 00:34:14.510419  290793 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:34:14.510676  290793 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:34:14.510684  290793 out.go:374] Setting ErrFile to fd 2...
	I1122 00:34:14.510689  290793 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:34:14.510857  290793 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9122/.minikube/bin
	I1122 00:34:14.511078  290793 out.go:368] Setting JSON to false
	I1122 00:34:14.511098  290793 mustload.go:66] Loading cluster: embed-certs-084979
	I1122 00:34:14.511411  290793 config.go:182] Loaded profile config "embed-certs-084979": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:34:14.511798  290793 cli_runner.go:164] Run: docker container inspect embed-certs-084979 --format={{.State.Status}}
	I1122 00:34:14.530506  290793 host.go:66] Checking if "embed-certs-084979" exists ...
	I1122 00:34:14.530833  290793 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:34:14.600769  290793 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:81 OomKillDisable:false NGoroutines:88 SystemTime:2025-11-22 00:34:14.588289298 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1122 00:34:14.601723  290793 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-084979 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1122 00:34:14.604139  290793 out.go:179] * Pausing node embed-certs-084979 ... 
	I1122 00:34:14.606410  290793 host.go:66] Checking if "embed-certs-084979" exists ...
	I1122 00:34:14.606747  290793 ssh_runner.go:195] Run: systemctl --version
	I1122 00:34:14.606787  290793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-084979
	I1122 00:34:14.628977  290793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/embed-certs-084979/id_rsa Username:docker}
	I1122 00:34:14.726450  290793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:34:14.740677  290793 pause.go:52] kubelet running: true
	I1122 00:34:14.740742  290793 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1122 00:34:14.975080  290793 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1122 00:34:14.975191  290793 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1122 00:34:15.052457  290793 cri.go:89] found id: "214f0202a39acc33ca7e12f9cc9bbe8841fd2892b64ce24bd981f90fcc5f380d"
	I1122 00:34:15.052485  290793 cri.go:89] found id: "7a3b2db058ecc0936bd81211047530ef5b9db1b29a2da62db5f78f96fef9818a"
	I1122 00:34:15.052499  290793 cri.go:89] found id: "63a0c0dc4e6cf5ffc4c266654608cd900faa4d0733422f70622d1222784119da"
	I1122 00:34:15.052505  290793 cri.go:89] found id: "b2cdb618d6f5111ef35374169192910ce886543535917970b8758a90f66cbbf7"
	I1122 00:34:15.052509  290793 cri.go:89] found id: "168f33d068d777b87d2d6ddd27efae417eae740c606d0d8e6c3e51c038f7784f"
	I1122 00:34:15.052514  290793 cri.go:89] found id: "7a9dde98c18cd008af1877b7920c71620a86d6002ad73e035d4cfdfd76b47f11"
	I1122 00:34:15.052519  290793 cri.go:89] found id: "e8c7c674c4b5496f49f6a4264627256c21e25a81e0bd0024407bf75f2b148d3e"
	I1122 00:34:15.052528  290793 cri.go:89] found id: "b3fad9a866aee07f831f2b8d9504071e3b206772e1161a3e3fa2e5137fe54ecd"
	I1122 00:34:15.052533  290793 cri.go:89] found id: "551c0189a873461b8c5320fb2ea521e29317b304075057684cc2bffd38fa0d39"
	I1122 00:34:15.052541  290793 cri.go:89] found id: "eac069e8ad82b5ef32220afcf3eb8a231f95e7ad1eb61bc623d9fd8633ea1791"
	I1122 00:34:15.052552  290793 cri.go:89] found id: "0bc2f72c37d29da0e0ff3321424e7cbbc4286a69d947d0bbd699c20ae15b9455"
	I1122 00:34:15.052556  290793 cri.go:89] found id: ""
	I1122 00:34:15.052606  290793 ssh_runner.go:195] Run: sudo runc list -f json
	I1122 00:34:15.065917  290793 retry.go:31] will retry after 187.759189ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-22T00:34:15Z" level=error msg="open /run/runc: no such file or directory"
	I1122 00:34:15.254259  290793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:34:15.270956  290793 pause.go:52] kubelet running: false
	I1122 00:34:15.271013  290793 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1122 00:34:15.495595  290793 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1122 00:34:15.495705  290793 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1122 00:34:15.572857  290793 cri.go:89] found id: "214f0202a39acc33ca7e12f9cc9bbe8841fd2892b64ce24bd981f90fcc5f380d"
	I1122 00:34:15.572883  290793 cri.go:89] found id: "7a3b2db058ecc0936bd81211047530ef5b9db1b29a2da62db5f78f96fef9818a"
	I1122 00:34:15.572888  290793 cri.go:89] found id: "63a0c0dc4e6cf5ffc4c266654608cd900faa4d0733422f70622d1222784119da"
	I1122 00:34:15.572892  290793 cri.go:89] found id: "b2cdb618d6f5111ef35374169192910ce886543535917970b8758a90f66cbbf7"
	I1122 00:34:15.572897  290793 cri.go:89] found id: "168f33d068d777b87d2d6ddd27efae417eae740c606d0d8e6c3e51c038f7784f"
	I1122 00:34:15.572902  290793 cri.go:89] found id: "7a9dde98c18cd008af1877b7920c71620a86d6002ad73e035d4cfdfd76b47f11"
	I1122 00:34:15.572906  290793 cri.go:89] found id: "e8c7c674c4b5496f49f6a4264627256c21e25a81e0bd0024407bf75f2b148d3e"
	I1122 00:34:15.572910  290793 cri.go:89] found id: "b3fad9a866aee07f831f2b8d9504071e3b206772e1161a3e3fa2e5137fe54ecd"
	I1122 00:34:15.572914  290793 cri.go:89] found id: "551c0189a873461b8c5320fb2ea521e29317b304075057684cc2bffd38fa0d39"
	I1122 00:34:15.572922  290793 cri.go:89] found id: "eac069e8ad82b5ef32220afcf3eb8a231f95e7ad1eb61bc623d9fd8633ea1791"
	I1122 00:34:15.572928  290793 cri.go:89] found id: "0bc2f72c37d29da0e0ff3321424e7cbbc4286a69d947d0bbd699c20ae15b9455"
	I1122 00:34:15.572932  290793 cri.go:89] found id: ""
	I1122 00:34:15.572971  290793 ssh_runner.go:195] Run: sudo runc list -f json
	I1122 00:34:15.586832  290793 retry.go:31] will retry after 198.755649ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-22T00:34:15Z" level=error msg="open /run/runc: no such file or directory"
	I1122 00:34:15.786315  290793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:34:15.799075  290793 pause.go:52] kubelet running: false
	I1122 00:34:15.799140  290793 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1122 00:34:15.953258  290793 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1122 00:34:15.953349  290793 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1122 00:34:16.032934  290793 cri.go:89] found id: "214f0202a39acc33ca7e12f9cc9bbe8841fd2892b64ce24bd981f90fcc5f380d"
	I1122 00:34:16.032956  290793 cri.go:89] found id: "7a3b2db058ecc0936bd81211047530ef5b9db1b29a2da62db5f78f96fef9818a"
	I1122 00:34:16.032962  290793 cri.go:89] found id: "63a0c0dc4e6cf5ffc4c266654608cd900faa4d0733422f70622d1222784119da"
	I1122 00:34:16.032966  290793 cri.go:89] found id: "b2cdb618d6f5111ef35374169192910ce886543535917970b8758a90f66cbbf7"
	I1122 00:34:16.032971  290793 cri.go:89] found id: "168f33d068d777b87d2d6ddd27efae417eae740c606d0d8e6c3e51c038f7784f"
	I1122 00:34:16.032976  290793 cri.go:89] found id: "7a9dde98c18cd008af1877b7920c71620a86d6002ad73e035d4cfdfd76b47f11"
	I1122 00:34:16.032981  290793 cri.go:89] found id: "e8c7c674c4b5496f49f6a4264627256c21e25a81e0bd0024407bf75f2b148d3e"
	I1122 00:34:16.032986  290793 cri.go:89] found id: "b3fad9a866aee07f831f2b8d9504071e3b206772e1161a3e3fa2e5137fe54ecd"
	I1122 00:34:16.032990  290793 cri.go:89] found id: "551c0189a873461b8c5320fb2ea521e29317b304075057684cc2bffd38fa0d39"
	I1122 00:34:16.032999  290793 cri.go:89] found id: "eac069e8ad82b5ef32220afcf3eb8a231f95e7ad1eb61bc623d9fd8633ea1791"
	I1122 00:34:16.033004  290793 cri.go:89] found id: "0bc2f72c37d29da0e0ff3321424e7cbbc4286a69d947d0bbd699c20ae15b9455"
	I1122 00:34:16.033025  290793 cri.go:89] found id: ""
	I1122 00:34:16.033156  290793 ssh_runner.go:195] Run: sudo runc list -f json
	I1122 00:34:16.045215  290793 retry.go:31] will retry after 394.91425ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-22T00:34:16Z" level=error msg="open /run/runc: no such file or directory"
	I1122 00:34:16.440753  290793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:34:16.453307  290793 pause.go:52] kubelet running: false
	I1122 00:34:16.453354  290793 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1122 00:34:16.594444  290793 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1122 00:34:16.594524  290793 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1122 00:34:16.660095  290793 cri.go:89] found id: "214f0202a39acc33ca7e12f9cc9bbe8841fd2892b64ce24bd981f90fcc5f380d"
	I1122 00:34:16.660118  290793 cri.go:89] found id: "7a3b2db058ecc0936bd81211047530ef5b9db1b29a2da62db5f78f96fef9818a"
	I1122 00:34:16.660124  290793 cri.go:89] found id: "63a0c0dc4e6cf5ffc4c266654608cd900faa4d0733422f70622d1222784119da"
	I1122 00:34:16.660129  290793 cri.go:89] found id: "b2cdb618d6f5111ef35374169192910ce886543535917970b8758a90f66cbbf7"
	I1122 00:34:16.660134  290793 cri.go:89] found id: "168f33d068d777b87d2d6ddd27efae417eae740c606d0d8e6c3e51c038f7784f"
	I1122 00:34:16.660139  290793 cri.go:89] found id: "7a9dde98c18cd008af1877b7920c71620a86d6002ad73e035d4cfdfd76b47f11"
	I1122 00:34:16.660143  290793 cri.go:89] found id: "e8c7c674c4b5496f49f6a4264627256c21e25a81e0bd0024407bf75f2b148d3e"
	I1122 00:34:16.660147  290793 cri.go:89] found id: "b3fad9a866aee07f831f2b8d9504071e3b206772e1161a3e3fa2e5137fe54ecd"
	I1122 00:34:16.660152  290793 cri.go:89] found id: "551c0189a873461b8c5320fb2ea521e29317b304075057684cc2bffd38fa0d39"
	I1122 00:34:16.660162  290793 cri.go:89] found id: "eac069e8ad82b5ef32220afcf3eb8a231f95e7ad1eb61bc623d9fd8633ea1791"
	I1122 00:34:16.660166  290793 cri.go:89] found id: "0bc2f72c37d29da0e0ff3321424e7cbbc4286a69d947d0bbd699c20ae15b9455"
	I1122 00:34:16.660171  290793 cri.go:89] found id: ""
	I1122 00:34:16.660223  290793 ssh_runner.go:195] Run: sudo runc list -f json
	I1122 00:34:16.673088  290793 out.go:203] 
	W1122 00:34:16.674174  290793 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-22T00:34:16Z" level=error msg="open /run/runc: no such file or directory"
	
	W1122 00:34:16.674196  290793 out.go:285] * 
	W1122 00:34:16.678227  290793 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1122 00:34:16.679396  290793 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p embed-certs-084979 --alsologtostderr -v=1 failed: exit status 80
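The failure above reduces to one mismatch: crictl (talking to crio) still lists eleven running containers, but minikube's pause path shells out to `sudo runc list -f json`, which reads runc's state directory and dies with `open /run/runc: no such file or directory`. A minimal diagnosis sketch, assuming the kicbase image configures crio with a runtime whose state lives elsewhere (the /run/crun path below is an assumption, not taken from this log):

	# Containers remain visible through the CRI, as the log shows:
	out/minikube-linux-amd64 -p embed-certs-084979 ssh -- sudo crictl ps -q
	# ...but runc's state directory is absent, which is what pause trips over:
	out/minikube-linux-amd64 -p embed-certs-084979 ssh -- ls /run/runc
	# Assumed alternative runtime root worth checking on this image:
	out/minikube-linux-amd64 -p embed-certs-084979 ssh -- ls /run/crun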
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-084979
helpers_test.go:243: (dbg) docker inspect embed-certs-084979:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e8d02ad472d1b8f40ae0fd92f7878724d9d1bfd0ed3ab0121e898a6471675d58",
	        "Created": "2025-11-22T00:31:48.222415176Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 272166,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-22T00:33:19.763704363Z",
	            "FinishedAt": "2025-11-22T00:33:18.875915274Z"
	        },
	        "Image": "sha256:e5906b22e872a17998ae88aee6d850484e7a99144e0db6afcf2c44a53e6042d4",
	        "ResolvConfPath": "/var/lib/docker/containers/e8d02ad472d1b8f40ae0fd92f7878724d9d1bfd0ed3ab0121e898a6471675d58/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e8d02ad472d1b8f40ae0fd92f7878724d9d1bfd0ed3ab0121e898a6471675d58/hostname",
	        "HostsPath": "/var/lib/docker/containers/e8d02ad472d1b8f40ae0fd92f7878724d9d1bfd0ed3ab0121e898a6471675d58/hosts",
	        "LogPath": "/var/lib/docker/containers/e8d02ad472d1b8f40ae0fd92f7878724d9d1bfd0ed3ab0121e898a6471675d58/e8d02ad472d1b8f40ae0fd92f7878724d9d1bfd0ed3ab0121e898a6471675d58-json.log",
	        "Name": "/embed-certs-084979",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-084979:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-084979",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e8d02ad472d1b8f40ae0fd92f7878724d9d1bfd0ed3ab0121e898a6471675d58",
	                "LowerDir": "/var/lib/docker/overlay2/d197a9ef188060d5b1f2712126cd316d5f140f04dd173528b8af8e7594d20568-init/diff:/var/lib/docker/overlay2/ba9391341a6f38c98c330a240b44a37e901d779a7c95e15c141c59c46e09c348/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d197a9ef188060d5b1f2712126cd316d5f140f04dd173528b8af8e7594d20568/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d197a9ef188060d5b1f2712126cd316d5f140f04dd173528b8af8e7594d20568/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d197a9ef188060d5b1f2712126cd316d5f140f04dd173528b8af8e7594d20568/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-084979",
	                "Source": "/var/lib/docker/volumes/embed-certs-084979/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-084979",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-084979",
	                "name.minikube.sigs.k8s.io": "embed-certs-084979",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "00c34a0d6397cc783272ba8c63b628e63a5d89e440a413e263b6077ab7adcaa7",
	            "SandboxKey": "/var/run/docker/netns/00c34a0d6397",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-084979": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0d41c17c02e28b2753b6d078dd9a412b682778fc89e095be2adad8a79a3a99d8",
	                    "EndpointID": "d0347b03605f5059833eefe2b44e27a910145cfe7bbc3a67bd7e603fbee6f733",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "5e:76:28:b2:2e:d6",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-084979",
	                        "e8d02ad472d1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
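Of the inspect dump above, the fields the post-mortem actually leans on are the container state and the published SSH port. A sketch of pulling just those with Go templates, in the same `docker container inspect -f` form minikube itself uses later in this log:

	# Container is running and not paused, matching "State" in the dump:
	docker container inspect -f '{{.State.Status}} paused={{.State.Paused}}' embed-certs-084979
	# Host port mapped to the node's SSH port 22 (33088 per the dump):
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' embed-certs-084979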
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-084979 -n embed-certs-084979
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-084979 -n embed-certs-084979: exit status 2 (304.666034ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
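Exit status 2 with Host reporting "Running" is consistent with the failed pause: the node container is up, but kubelet was disabled during the pause attempt ("kubelet running: false" above), so cluster-level checks cannot pass; hence the harness's "may be ok". A sketch of widening the same query (the extra template fields come from minikube's status output; reading exit code 2 as "host up, components down" is an inference from this log, not a documented contract):

	# One-line host vs. component view; expect a non-zero exit while kubelet is off.
	out/minikube-linux-amd64 status -p embed-certs-084979 --format='{{.Host}}:{{.Kubelet}}:{{.APIServer}}'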
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-084979 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-084979 logs -n 25: (1.193923025s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p no-preload-983546                                                                                                                                                                                                                          │ no-preload-983546            │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │ 22 Nov 25 00:32 UTC │
	│ start   │ -p newest-cni-531189 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-531189            │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │ 22 Nov 25 00:33 UTC │
	│ addons  │ enable metrics-server -p embed-certs-084979 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-084979           │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │                     │
	│ stop    │ -p embed-certs-084979 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-084979           │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │ 22 Nov 25 00:33 UTC │
	│ addons  │ enable dashboard -p embed-certs-084979 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-084979           │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │ 22 Nov 25 00:33 UTC │
	│ start   │ -p embed-certs-084979 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-084979           │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │ 22 Nov 25 00:34 UTC │
	│ addons  │ enable metrics-server -p newest-cni-531189 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-531189            │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │                     │
	│ stop    │ -p newest-cni-531189 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-531189            │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │ 22 Nov 25 00:33 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-046175 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-046175 │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-046175 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-046175 │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │ 22 Nov 25 00:33 UTC │
	│ addons  │ enable dashboard -p newest-cni-531189 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-531189            │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │ 22 Nov 25 00:33 UTC │
	│ start   │ -p newest-cni-531189 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-531189            │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │ 22 Nov 25 00:33 UTC │
	│ image   │ newest-cni-531189 image list --format=json                                                                                                                                                                                                    │ newest-cni-531189            │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │ 22 Nov 25 00:33 UTC │
	│ pause   │ -p newest-cni-531189 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-531189            │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-046175 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-046175 │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │ 22 Nov 25 00:33 UTC │
	│ start   │ -p default-k8s-diff-port-046175 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-046175 │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │                     │
	│ start   │ -p kubernetes-upgrade-619859 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-619859    │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │                     │
	│ start   │ -p kubernetes-upgrade-619859 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-619859    │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │ 22 Nov 25 00:33 UTC │
	│ delete  │ -p newest-cni-531189                                                                                                                                                                                                                          │ newest-cni-531189            │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │ 22 Nov 25 00:33 UTC │
	│ delete  │ -p newest-cni-531189                                                                                                                                                                                                                          │ newest-cni-531189            │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │ 22 Nov 25 00:33 UTC │
	│ start   │ -p auto-239758 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-239758                  │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-619859                                                                                                                                                                                                                  │ kubernetes-upgrade-619859    │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │ 22 Nov 25 00:34 UTC │
	│ start   │ -p kindnet-239758 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio                                                                                                      │ kindnet-239758               │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │                     │
	│ image   │ embed-certs-084979 image list --format=json                                                                                                                                                                                                   │ embed-certs-084979           │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │ 22 Nov 25 00:34 UTC │
	│ pause   │ -p embed-certs-084979 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-084979           │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/22 00:34:00
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1122 00:34:00.311386  286707 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:34:00.311651  286707 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:34:00.311662  286707 out.go:374] Setting ErrFile to fd 2...
	I1122 00:34:00.311670  286707 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:34:00.311899  286707 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9122/.minikube/bin
	I1122 00:34:00.312407  286707 out.go:368] Setting JSON to false
	I1122 00:34:00.313669  286707 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4589,"bootTime":1763767051,"procs":406,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1122 00:34:00.313725  286707 start.go:143] virtualization: kvm guest
	I1122 00:34:00.315575  286707 out.go:179] * [kindnet-239758] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1122 00:34:00.316959  286707 out.go:179]   - MINIKUBE_LOCATION=21934
	I1122 00:34:00.316989  286707 notify.go:221] Checking for updates...
	I1122 00:34:00.319162  286707 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 00:34:00.320747  286707 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-9122/kubeconfig
	I1122 00:34:00.322032  286707 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-9122/.minikube
	I1122 00:34:00.323281  286707 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1122 00:34:00.324325  286707 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 00:33:55.874797  280462 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1122 00:33:55.879502  280462 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1122 00:33:55.880718  280462 api_server.go:141] control plane version: v1.34.1
	I1122 00:33:55.880747  280462 api_server.go:131] duration metric: took 507.052583ms to wait for apiserver health ...
	I1122 00:33:55.880759  280462 system_pods.go:43] waiting for kube-system pods to appear ...
	I1122 00:33:55.884681  280462 system_pods.go:59] 8 kube-system pods found
	I1122 00:33:55.884721  280462 system_pods.go:61] "coredns-66bc5c9577-np5nq" [6bf2527b-42f1-42dd-980e-e1006db2273d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:33:55.884733  280462 system_pods.go:61] "etcd-default-k8s-diff-port-046175" [13c461fc-31bf-48a9-afd5-e9d9d15ed8d0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1122 00:33:55.884747  280462 system_pods.go:61] "kindnet-nqk28" [fd6ece46-cf0c-4d24-8859-aaf670c70fb5] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1122 00:33:55.884753  280462 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-046175" [baf53a5a-35f6-4d69-8adf-62d13c8d4d27] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1122 00:33:55.884763  280462 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-046175" [12f53e71-5518-4b0b-bcf0-7f99616fcf48] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1122 00:33:55.884769  280462 system_pods.go:61] "kube-proxy-jdzcl" [f20a454d-e357-46bf-803b-f0166329db1a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1122 00:33:55.884783  280462 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-046175" [042d05f3-1e4f-45d5-abad-8e69368f986c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1122 00:33:55.884788  280462 system_pods.go:61] "storage-provisioner" [5f32ba19-162c-4893-a387-2c8b492c1b6a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:33:55.884798  280462 system_pods.go:74] duration metric: took 4.03168ms to wait for pod list to return data ...
	I1122 00:33:55.884808  280462 default_sa.go:34] waiting for default service account to be created ...
	I1122 00:33:55.887037  280462 default_sa.go:45] found service account: "default"
	I1122 00:33:55.887082  280462 default_sa.go:55] duration metric: took 2.266482ms for default service account to be created ...
	I1122 00:33:55.887093  280462 system_pods.go:116] waiting for k8s-apps to be running ...
	I1122 00:33:55.889545  280462 system_pods.go:86] 8 kube-system pods found
	I1122 00:33:55.889575  280462 system_pods.go:89] "coredns-66bc5c9577-np5nq" [6bf2527b-42f1-42dd-980e-e1006db2273d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:33:55.889585  280462 system_pods.go:89] "etcd-default-k8s-diff-port-046175" [13c461fc-31bf-48a9-afd5-e9d9d15ed8d0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1122 00:33:55.889602  280462 system_pods.go:89] "kindnet-nqk28" [fd6ece46-cf0c-4d24-8859-aaf670c70fb5] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1122 00:33:55.889610  280462 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-046175" [baf53a5a-35f6-4d69-8adf-62d13c8d4d27] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1122 00:33:55.889622  280462 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-046175" [12f53e71-5518-4b0b-bcf0-7f99616fcf48] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1122 00:33:55.889629  280462 system_pods.go:89] "kube-proxy-jdzcl" [f20a454d-e357-46bf-803b-f0166329db1a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1122 00:33:55.889637  280462 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-046175" [042d05f3-1e4f-45d5-abad-8e69368f986c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1122 00:33:55.889644  280462 system_pods.go:89] "storage-provisioner" [5f32ba19-162c-4893-a387-2c8b492c1b6a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:33:55.889653  280462 system_pods.go:126] duration metric: took 2.55247ms to wait for k8s-apps to be running ...
	I1122 00:33:55.889662  280462 system_svc.go:44] waiting for kubelet service to be running ....
	I1122 00:33:55.889708  280462 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:33:55.904654  280462 system_svc.go:56] duration metric: took 14.984652ms WaitForService to wait for kubelet
	I1122 00:33:55.904682  280462 kubeadm.go:587] duration metric: took 3.142754426s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:33:55.904704  280462 node_conditions.go:102] verifying NodePressure condition ...
	I1122 00:33:55.907554  280462 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1122 00:33:55.907591  280462 node_conditions.go:123] node cpu capacity is 8
	I1122 00:33:55.907611  280462 node_conditions.go:105] duration metric: took 2.900646ms to run NodePressure ...
	I1122 00:33:55.907626  280462 start.go:242] waiting for startup goroutines ...
	I1122 00:33:55.907639  280462 start.go:247] waiting for cluster config update ...
	I1122 00:33:55.907657  280462 start.go:256] writing updated cluster config ...
	I1122 00:33:55.907940  280462 ssh_runner.go:195] Run: rm -f paused
	I1122 00:33:55.912033  280462 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:33:55.916393  280462 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-np5nq" in "kube-system" namespace to be "Ready" or be gone ...
	W1122 00:33:57.924716  280462 pod_ready.go:104] pod "coredns-66bc5c9577-np5nq" is not "Ready", error: <nil>
	I1122 00:34:00.325736  286707 config.go:182] Loaded profile config "auto-239758": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:34:00.325829  286707 config.go:182] Loaded profile config "default-k8s-diff-port-046175": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:34:00.325918  286707 config.go:182] Loaded profile config "embed-certs-084979": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:34:00.326012  286707 driver.go:422] Setting default libvirt URI to qemu:///system
	I1122 00:34:00.354118  286707 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1122 00:34:00.354249  286707 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:34:00.424763  286707 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:73 OomKillDisable:false NGoroutines:91 SystemTime:2025-11-22 00:34:00.413840037 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1122 00:34:00.424865  286707 docker.go:319] overlay module found
	I1122 00:34:00.426048  286707 out.go:179] * Using the docker driver based on user configuration
	I1122 00:34:00.427021  286707 start.go:309] selected driver: docker
	I1122 00:34:00.427033  286707 start.go:930] validating driver "docker" against <nil>
	I1122 00:34:00.427043  286707 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 00:34:00.427823  286707 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:34:00.502816  286707 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:79 SystemTime:2025-11-22 00:34:00.481212832 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1122 00:34:00.503015  286707 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1122 00:34:00.503286  286707 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:34:00.508496  286707 out.go:179] * Using Docker driver with root privileges
	I1122 00:34:00.509539  286707 cni.go:84] Creating CNI manager for "kindnet"
	I1122 00:34:00.509559  286707 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1122 00:34:00.509640  286707 start.go:353] cluster config:
	{Name:kindnet-239758 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-239758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:34:00.511038  286707 out.go:179] * Starting "kindnet-239758" primary control-plane node in "kindnet-239758" cluster
	I1122 00:34:00.512121  286707 cache.go:134] Beginning downloading kic base image for docker with crio
	I1122 00:34:00.513194  286707 out.go:179] * Pulling base image v0.0.48-1763588073-21934 ...
	I1122 00:34:00.514142  286707 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:34:00.514175  286707 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21934-9122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1122 00:34:00.514183  286707 cache.go:65] Caching tarball of preloaded images
	I1122 00:34:00.514237  286707 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon
	I1122 00:34:00.514283  286707 preload.go:238] Found /home/jenkins/minikube-integration/21934-9122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1122 00:34:00.514299  286707 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1122 00:34:00.514419  286707 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/kindnet-239758/config.json ...
	I1122 00:34:00.514447  286707 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/kindnet-239758/config.json: {Name:mk5be5a31b2aa4847f9905932e541b5e55d80175 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:34:00.536861  286707 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon, skipping pull
	I1122 00:34:00.536882  286707 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e exists in daemon, skipping load
	I1122 00:34:00.536900  286707 cache.go:243] Successfully downloaded all kic artifacts
	I1122 00:34:00.536935  286707 start.go:360] acquireMachinesLock for kindnet-239758: {Name:mkee69b5fdeef63bab530fe4c4745691b367114b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:34:00.537035  286707 start.go:364] duration metric: took 79.354µs to acquireMachinesLock for "kindnet-239758"
	I1122 00:34:00.537088  286707 start.go:93] Provisioning new machine with config: &{Name:kindnet-239758 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-239758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1122 00:34:00.537184  286707 start.go:125] createHost starting for "" (driver="docker")
	W1122 00:34:00.804735  271909 pod_ready.go:104] pod "coredns-66bc5c9577-jjldt" is not "Ready", error: <nil>
	I1122 00:34:01.303525  271909 pod_ready.go:94] pod "coredns-66bc5c9577-jjldt" is "Ready"
	I1122 00:34:01.303562  271909 pod_ready.go:86] duration metric: took 31.006240985s for pod "coredns-66bc5c9577-jjldt" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:01.306171  271909 pod_ready.go:83] waiting for pod "etcd-embed-certs-084979" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:01.311315  271909 pod_ready.go:94] pod "etcd-embed-certs-084979" is "Ready"
	I1122 00:34:01.311339  271909 pod_ready.go:86] duration metric: took 5.144616ms for pod "etcd-embed-certs-084979" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:01.314302  271909 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-084979" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:01.319601  271909 pod_ready.go:94] pod "kube-apiserver-embed-certs-084979" is "Ready"
	I1122 00:34:01.319621  271909 pod_ready.go:86] duration metric: took 5.292789ms for pod "kube-apiserver-embed-certs-084979" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:01.322868  271909 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-084979" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:01.501261  271909 pod_ready.go:94] pod "kube-controller-manager-embed-certs-084979" is "Ready"
	I1122 00:34:01.501303  271909 pod_ready.go:86] duration metric: took 178.409756ms for pod "kube-controller-manager-embed-certs-084979" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:01.701517  271909 pod_ready.go:83] waiting for pod "kube-proxy-lsc2k" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:02.101840  271909 pod_ready.go:94] pod "kube-proxy-lsc2k" is "Ready"
	I1122 00:34:02.101874  271909 pod_ready.go:86] duration metric: took 400.326166ms for pod "kube-proxy-lsc2k" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:02.301925  271909 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-084979" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:02.703010  271909 pod_ready.go:94] pod "kube-scheduler-embed-certs-084979" is "Ready"
	I1122 00:34:02.703040  271909 pod_ready.go:86] duration metric: took 401.090623ms for pod "kube-scheduler-embed-certs-084979" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:02.703067  271909 pod_ready.go:40] duration metric: took 32.409029552s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:34:02.762926  271909 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1122 00:34:02.764590  271909 out.go:179] * Done! kubectl is now configured to use "embed-certs-084979" cluster and "default" namespace by default
	I1122 00:33:59.687346  284750 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21934-9122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-239758:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -I lz4 -xf /preloaded.tar -C /extractDir: (4.774367656s)
	I1122 00:33:59.687376  284750 kic.go:203] duration metric: took 4.774507848s to extract preloaded images to volume ...
	W1122 00:33:59.687465  284750 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1122 00:33:59.687526  284750 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1122 00:33:59.687574  284750 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1122 00:33:59.745536  284750 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-239758 --name auto-239758 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-239758 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-239758 --network auto-239758 --ip 192.168.76.2 --volume auto-239758:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e
	I1122 00:34:00.337114  284750 cli_runner.go:164] Run: docker container inspect auto-239758 --format={{.State.Running}}
	I1122 00:34:00.357568  284750 cli_runner.go:164] Run: docker container inspect auto-239758 --format={{.State.Status}}
	I1122 00:34:00.379188  284750 cli_runner.go:164] Run: docker exec auto-239758 stat /var/lib/dpkg/alternatives/iptables
	I1122 00:34:00.436041  284750 oci.go:144] the created container "auto-239758" has a running status.
	I1122 00:34:00.436083  284750 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21934-9122/.minikube/machines/auto-239758/id_rsa...
	I1122 00:34:00.525833  284750 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21934-9122/.minikube/machines/auto-239758/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1122 00:34:00.556190  284750 cli_runner.go:164] Run: docker container inspect auto-239758 --format={{.State.Status}}
	I1122 00:34:00.579712  284750 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1122 00:34:00.579736  284750 kic_runner.go:114] Args: [docker exec --privileged auto-239758 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1122 00:34:00.623465  284750 cli_runner.go:164] Run: docker container inspect auto-239758 --format={{.State.Status}}
	I1122 00:34:00.648969  284750 machine.go:94] provisionDockerMachine start ...
	I1122 00:34:00.649108  284750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-239758
	I1122 00:34:00.678962  284750 main.go:143] libmachine: Using SSH client type: native
	I1122 00:34:00.679457  284750 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1122 00:34:00.679527  284750 main.go:143] libmachine: About to run SSH command:
	hostname
	I1122 00:34:00.680453  284750 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55264->127.0.0.1:33103: read: connection reset by peer
	I1122 00:34:03.821813  284750 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-239758
	
	I1122 00:34:03.821881  284750 ubuntu.go:182] provisioning hostname "auto-239758"
	I1122 00:34:03.821976  284750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-239758
	I1122 00:34:03.843983  284750 main.go:143] libmachine: Using SSH client type: native
	I1122 00:34:03.844324  284750 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1122 00:34:03.844352  284750 main.go:143] libmachine: About to run SSH command:
	sudo hostname auto-239758 && echo "auto-239758" | sudo tee /etc/hostname
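
	The provisioning steps above run shell commands over SSH against the container's published port (127.0.0.1:33103 here); the earlier "connection reset by peer" is the race where sshd inside the freshly started container is not yet accepting connections. A minimal Go sketch of that dial-with-retry pattern using golang.org/x/crypto/ssh — runOverSSH and the fixed retry count are illustrative assumptions, not minikube's actual libmachine code:

    package sshprovision

    import (
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    // runOverSSH dials the container's published SSH port with the
    // generated key and runs one command, retrying the initial dial
    // because sshd may still be starting inside the container.
    func runOverSSH(addr, keyPath, cmd string) ([]byte, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return nil, err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return nil, err
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local kic container only
            Timeout:         5 * time.Second,
        }
        var client *ssh.Client
        for i := 0; i < 10; i++ { // simple fixed retry; minikube's policy differs
            if client, err = ssh.Dial("tcp", addr, cfg); err == nil {
                break
            }
            time.Sleep(time.Second)
        }
        if err != nil {
            return nil, err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return nil, err
        }
        defer sess.Close()
        return sess.CombinedOutput(cmd)
    }
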
	I1122 00:34:00.539579  286707 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1122 00:34:00.539763  286707 start.go:159] libmachine.API.Create for "kindnet-239758" (driver="docker")
	I1122 00:34:00.539790  286707 client.go:173] LocalClient.Create starting
	I1122 00:34:00.539867  286707 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem
	I1122 00:34:00.539906  286707 main.go:143] libmachine: Decoding PEM data...
	I1122 00:34:00.539924  286707 main.go:143] libmachine: Parsing certificate...
	I1122 00:34:00.539980  286707 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem
	I1122 00:34:00.540001  286707 main.go:143] libmachine: Decoding PEM data...
	I1122 00:34:00.540034  286707 main.go:143] libmachine: Parsing certificate...
	I1122 00:34:00.540529  286707 cli_runner.go:164] Run: docker network inspect kindnet-239758 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1122 00:34:00.560291  286707 cli_runner.go:211] docker network inspect kindnet-239758 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1122 00:34:00.560406  286707 network_create.go:284] running [docker network inspect kindnet-239758] to gather additional debugging logs...
	I1122 00:34:00.560429  286707 cli_runner.go:164] Run: docker network inspect kindnet-239758
	W1122 00:34:00.580801  286707 cli_runner.go:211] docker network inspect kindnet-239758 returned with exit code 1
	I1122 00:34:00.580832  286707 network_create.go:287] error running [docker network inspect kindnet-239758]: docker network inspect kindnet-239758: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kindnet-239758 not found
	I1122 00:34:00.580848  286707 network_create.go:289] output of [docker network inspect kindnet-239758]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kindnet-239758 not found
	
	** /stderr **
	I1122 00:34:00.580991  286707 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:34:00.603691  286707 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-680fbf0b84de IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8e:e6:2f:e5:eb:9f} reservation:<nil>}
	I1122 00:34:00.604709  286707 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-fb90361b5ee3 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:6a:cd:d3:0b:da:39} reservation:<nil>}
	I1122 00:34:00.605741  286707 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8fa9d9e8b5ac IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:be:02:4b:3f:d0:70} reservation:<nil>}
	I1122 00:34:00.606544  286707 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-8fcd7657b64b IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:d2:ad:c5:eb:8c:57} reservation:<nil>}
	I1122 00:34:00.607337  286707 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-85b8c03d926b IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:de:8e:84:e4:fa:a8} reservation:<nil>}
	I1122 00:34:00.608010  286707 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-0d41c17c02e2 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:e2:39:6f:dd:b9:0c} reservation:<nil>}
	I1122 00:34:00.609098  286707 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f53530}
	I1122 00:34:00.609132  286707 network_create.go:124] attempt to create docker network kindnet-239758 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1122 00:34:00.609198  286707 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-239758 kindnet-239758
	I1122 00:34:00.674766  286707 network_create.go:108] docker network kindnet-239758 192.168.103.0/24 created
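
	The network.go lines above show the subnet picker: it walks private /24 candidates (49, 58, 67, ... — the third octet steps by 9 in this log), skips any that collide with an existing bridge, and takes the first free one. A minimal, runnable Go sketch of that scan using only the standard library — isTaken and the hard-coded taken list are illustrative stand-ins for the host-interface probing minikube actually does:

    package main

    import (
        "fmt"
        "net"
    )

    // isTaken reports whether a candidate subnet overlaps one already in use.
    func isTaken(candidate *net.IPNet, taken []*net.IPNet) bool {
        for _, t := range taken {
            if t.Contains(candidate.IP) || candidate.Contains(t.IP) {
                return true
            }
        }
        return false
    }

    func main() {
        var taken []*net.IPNet
        for _, cidr := range []string{
            "192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24",
            "192.168.76.0/24", "192.168.85.0/24", "192.168.94.0/24",
        } {
            _, n, _ := net.ParseCIDR(cidr)
            taken = append(taken, n)
        }
        // Step the third octet by 9, mirroring the candidates in the log.
        for third := 49; third < 255; third += 9 {
            _, candidate, _ := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", third))
            if !isTaken(candidate, taken) {
                fmt.Println("using free private subnet", candidate) // 192.168.103.0/24 here
                return
            }
        }
    }
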
	I1122 00:34:00.674805  286707 kic.go:121] calculated static IP "192.168.103.2" for the "kindnet-239758" container
	I1122 00:34:00.674876  286707 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1122 00:34:00.699124  286707 cli_runner.go:164] Run: docker volume create kindnet-239758 --label name.minikube.sigs.k8s.io=kindnet-239758 --label created_by.minikube.sigs.k8s.io=true
	I1122 00:34:00.719962  286707 oci.go:103] Successfully created a docker volume kindnet-239758
	I1122 00:34:00.720153  286707 cli_runner.go:164] Run: docker run --rm --name kindnet-239758-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-239758 --entrypoint /usr/bin/test -v kindnet-239758:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -d /var/lib
	I1122 00:34:01.176816  286707 oci.go:107] Successfully prepared a docker volume kindnet-239758
	I1122 00:34:01.176887  286707 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:34:01.176904  286707 kic.go:194] Starting extracting preloaded images to volume ...
	I1122 00:34:01.176991  286707 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21934-9122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-239758:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -I lz4 -xf /preloaded.tar -C /extractDir
	W1122 00:34:00.422572  280462 pod_ready.go:104] pod "coredns-66bc5c9577-np5nq" is not "Ready", error: <nil>
	W1122 00:34:02.923012  280462 pod_ready.go:104] pod "coredns-66bc5c9577-np5nq" is not "Ready", error: <nil>
	I1122 00:34:04.118824  284750 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-239758
	
	I1122 00:34:04.119003  284750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-239758
	I1122 00:34:04.142610  284750 main.go:143] libmachine: Using SSH client type: native
	I1122 00:34:04.142962  284750 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1122 00:34:04.142996  284750 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-239758' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-239758/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-239758' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1122 00:34:04.272392  284750 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1122 00:34:04.272425  284750 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21934-9122/.minikube CaCertPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21934-9122/.minikube}
	I1122 00:34:04.272451  284750 ubuntu.go:190] setting up certificates
	I1122 00:34:04.272463  284750 provision.go:84] configureAuth start
	I1122 00:34:04.272521  284750 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-239758
	I1122 00:34:04.289928  284750 provision.go:143] copyHostCerts
	I1122 00:34:04.289988  284750 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9122/.minikube/ca.pem, removing ...
	I1122 00:34:04.289996  284750 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9122/.minikube/ca.pem
	I1122 00:34:04.292375  284750 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21934-9122/.minikube/ca.pem (1078 bytes)
	I1122 00:34:04.292576  284750 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9122/.minikube/cert.pem, removing ...
	I1122 00:34:04.292611  284750 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9122/.minikube/cert.pem
	I1122 00:34:04.292655  284750 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21934-9122/.minikube/cert.pem (1123 bytes)
	I1122 00:34:04.292730  284750 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9122/.minikube/key.pem, removing ...
	I1122 00:34:04.292739  284750 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9122/.minikube/key.pem
	I1122 00:34:04.292779  284750 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21934-9122/.minikube/key.pem (1675 bytes)
	I1122 00:34:04.292846  284750 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21934-9122/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca-key.pem org=jenkins.auto-239758 san=[127.0.0.1 192.168.76.2 auto-239758 localhost minikube]
	I1122 00:34:04.406300  284750 provision.go:177] copyRemoteCerts
	I1122 00:34:04.406361  284750 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1122 00:34:04.406419  284750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-239758
	I1122 00:34:04.424432  284750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/auto-239758/id_rsa Username:docker}
	I1122 00:34:04.516478  284750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1122 00:34:04.565871  284750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1122 00:34:04.588963  284750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1122 00:34:04.606955  284750 provision.go:87] duration metric: took 334.476703ms to configureAuth
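
	configureAuth above issues a server certificate signed by the local minikube CA with the SANs listed in the san=[...] line. A minimal sketch of that issuance with Go's crypto/x509, assuming the CA cert and key are already loaded — newServerCert and the validity window are illustrative choices, not minikube's exact parameters:

    package provision

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "time"
    )

    // newServerCert issues a CA-signed server cert carrying the IP and
    // DNS SANs seen in the log (127.0.0.1, 192.168.76.2, auto-239758, ...).
    func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) (certPEM, keyPEM []byte, err error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.auto-239758"}},
            NotBefore:    time.Now().Add(-time.Hour),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
            DNSNames:     []string{"auto-239758", "localhost", "minikube"},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
        if err != nil {
            return nil, nil, err
        }
        certPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
        keyPEM = pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
        return certPEM, keyPEM, nil
    }
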
	I1122 00:34:04.606984  284750 ubuntu.go:206] setting minikube options for container-runtime
	I1122 00:34:04.607185  284750 config.go:182] Loaded profile config "auto-239758": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:34:04.607340  284750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-239758
	I1122 00:34:04.625114  284750 main.go:143] libmachine: Using SSH client type: native
	I1122 00:34:04.625403  284750 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1122 00:34:04.625423  284750 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1122 00:34:05.167489  284750 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1122 00:34:05.167519  284750 machine.go:97] duration metric: took 4.518520944s to provisionDockerMachine
	I1122 00:34:05.167531  284750 client.go:176] duration metric: took 10.981764453s to LocalClient.Create
	I1122 00:34:05.167547  284750 start.go:167] duration metric: took 10.981824149s to libmachine.API.Create "auto-239758"
	I1122 00:34:05.167559  284750 start.go:293] postStartSetup for "auto-239758" (driver="docker")
	I1122 00:34:05.167570  284750 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1122 00:34:05.167647  284750 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1122 00:34:05.167687  284750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-239758
	I1122 00:34:05.185427  284750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/auto-239758/id_rsa Username:docker}
	I1122 00:34:05.372282  284750 ssh_runner.go:195] Run: cat /etc/os-release
	I1122 00:34:05.375818  284750 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1122 00:34:05.375843  284750 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1122 00:34:05.375853  284750 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-9122/.minikube/addons for local assets ...
	I1122 00:34:05.375911  284750 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-9122/.minikube/files for local assets ...
	I1122 00:34:05.376006  284750 filesync.go:149] local asset: /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem -> 145852.pem in /etc/ssl/certs
	I1122 00:34:05.376137  284750 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1122 00:34:05.383538  284750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem --> /etc/ssl/certs/145852.pem (1708 bytes)
	I1122 00:34:05.485785  284750 start.go:296] duration metric: took 318.214083ms for postStartSetup
	I1122 00:34:05.546107  284750 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-239758
	I1122 00:34:05.564097  284750 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/auto-239758/config.json ...
	I1122 00:34:05.650901  284750 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:34:05.650979  284750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-239758
	I1122 00:34:05.668711  284750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/auto-239758/id_rsa Username:docker}
	I1122 00:34:05.755657  284750 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 00:34:05.759902  284750 start.go:128] duration metric: took 11.576041986s to createHost
	I1122 00:34:05.759926  284750 start.go:83] releasing machines lock for "auto-239758", held for 11.576180726s
	I1122 00:34:05.759987  284750 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-239758
	I1122 00:34:05.777243  284750 ssh_runner.go:195] Run: cat /version.json
	I1122 00:34:05.777285  284750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-239758
	I1122 00:34:05.777325  284750 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1122 00:34:05.777424  284750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-239758
	I1122 00:34:05.796354  284750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/auto-239758/id_rsa Username:docker}
	I1122 00:34:05.796714  284750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/auto-239758/id_rsa Username:docker}
	I1122 00:34:05.881752  284750 ssh_runner.go:195] Run: systemctl --version
	I1122 00:34:05.942870  284750 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1122 00:34:05.979281  284750 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1122 00:34:05.983997  284750 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1122 00:34:05.984074  284750 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1122 00:34:06.228989  284750 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
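
	The find/mv step above neutralizes any bridge or podman CNI configs by renaming them with a .mk_disabled suffix, so only the CNI minikube installs (kindnet here) is active. A minimal Go sketch of the same rename pass — disableBridgeConfigs is a hypothetical helper, not the actual cni.go function:

    package cni

    import (
        "os"
        "path/filepath"
        "strings"
    )

    // disableBridgeConfigs renames bridge/podman CNI configs so the
    // runtime ignores them, mirroring the find/mv step in the log.
    func disableBridgeConfigs(dir string) ([]string, error) {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return nil, err
        }
        var disabled []string
        for _, e := range entries {
            name := e.Name()
            if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
                continue
            }
            if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
                p := filepath.Join(dir, name)
                if err := os.Rename(p, p+".mk_disabled"); err != nil {
                    return disabled, err
                }
                disabled = append(disabled, p)
            }
        }
        return disabled, nil
    }
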
	I1122 00:34:06.229018  284750 start.go:496] detecting cgroup driver to use...
	I1122 00:34:06.229048  284750 detect.go:190] detected "systemd" cgroup driver on host os
	I1122 00:34:06.229108  284750 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1122 00:34:06.244914  284750 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1122 00:34:06.256462  284750 docker.go:218] disabling cri-docker service (if available) ...
	I1122 00:34:06.256571  284750 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1122 00:34:06.271664  284750 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1122 00:34:06.287714  284750 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1122 00:34:06.374112  284750 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1122 00:34:06.534222  284750 docker.go:234] disabling docker service ...
	I1122 00:34:06.534301  284750 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1122 00:34:06.552211  284750 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1122 00:34:06.564349  284750 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1122 00:34:06.695008  284750 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1122 00:34:06.843695  284750 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1122 00:34:06.856688  284750 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1122 00:34:06.872231  284750 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1122 00:34:06.872294  284750 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:34:06.882409  284750 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1122 00:34:06.882472  284750 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:34:06.891665  284750 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:34:06.901265  284750 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:34:06.910300  284750 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1122 00:34:06.919165  284750 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:34:06.927846  284750 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:34:06.944623  284750 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:34:06.956159  284750 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1122 00:34:06.966345  284750 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1122 00:34:06.976921  284750 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:34:07.105566  284750 ssh_runner.go:195] Run: sudo systemctl restart crio
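
	The sed one-liners above all follow one pattern: rewrite an existing `key = value` line in /etc/crio/crio.conf.d/02-crio.conf (pause_image, cgroup_manager, conmon_cgroup), then daemon-reload and restart crio so the drop-in takes effect. A minimal Go equivalent of one such rewrite, under the assumption that the key already exists in the file — setTOMLKey is an illustrative helper, not minikube's crio.go code:

    package crioconf

    import (
        "os"
        "regexp"
    )

    // setTOMLKey replaces an existing `key = ...` line in a CRI-O
    // drop-in, the same transform the sed one-liners perform.
    func setTOMLKey(path, key, value string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
        out := re.ReplaceAll(data, []byte(key+` = "`+value+`"`))
        return os.WriteFile(path, out, 0o644)
    }
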
	I1122 00:34:07.303438  284750 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1122 00:34:07.303512  284750 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1122 00:34:07.309697  284750 start.go:564] Will wait 60s for crictl version
	I1122 00:34:07.309759  284750 ssh_runner.go:195] Run: which crictl
	I1122 00:34:07.314462  284750 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1122 00:34:07.347108  284750 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1122 00:34:07.347191  284750 ssh_runner.go:195] Run: crio --version
	I1122 00:34:07.389151  284750 ssh_runner.go:195] Run: crio --version
	I1122 00:34:07.434508  284750 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1122 00:34:07.436260  284750 cli_runner.go:164] Run: docker network inspect auto-239758 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:34:07.462395  284750 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1122 00:34:07.467696  284750 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
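
	The bash pipeline above updates /etc/hosts by filtering out any stale host.minikube.internal line, appending a fresh one, and copying the result back into place. A minimal Go sketch of the same upsert — upsertHost is a hypothetical helper name:

    package hosts

    import (
        "os"
        "strings"
    )

    // upsertHost drops any existing line for the host and appends a
    // fresh "ip<TAB>host" entry, matching the grep -v / echo / cp
    // pipeline in the log.
    func upsertHost(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+host) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+host)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }
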
	I1122 00:34:07.485317  284750 kubeadm.go:884] updating cluster {Name:auto-239758 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-239758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1122 00:34:07.485480  284750 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:34:07.485548  284750 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:34:07.528026  284750 crio.go:514] all images are preloaded for cri-o runtime.
	I1122 00:34:07.528076  284750 crio.go:433] Images already preloaded, skipping extraction
	I1122 00:34:07.528134  284750 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:34:07.574918  284750 crio.go:514] all images are preloaded for cri-o runtime.
	I1122 00:34:07.575107  284750 cache_images.go:86] Images are preloaded, skipping loading
	I1122 00:34:07.575137  284750 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1122 00:34:07.575299  284750 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-239758 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-239758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1122 00:34:07.575406  284750 ssh_runner.go:195] Run: crio config
	I1122 00:34:07.640432  284750 cni.go:84] Creating CNI manager for ""
	I1122 00:34:07.640487  284750 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1122 00:34:07.640505  284750 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1122 00:34:07.640527  284750 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-239758 NodeName:auto-239758 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1122 00:34:07.640655  284750 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-239758"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1122 00:34:07.640709  284750 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1122 00:34:07.653367  284750 binaries.go:51] Found k8s binaries, skipping transfer
	I1122 00:34:07.653438  284750 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1122 00:34:07.666340  284750 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1122 00:34:07.687710  284750 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1122 00:34:07.711930  284750 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2207 bytes)
	I1122 00:34:07.732223  284750 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1122 00:34:07.736779  284750 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:34:07.750898  284750 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:34:07.858680  284750 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:34:07.883886  284750 certs.go:69] Setting up /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/auto-239758 for IP: 192.168.76.2
	I1122 00:34:07.883910  284750 certs.go:195] generating shared ca certs ...
	I1122 00:34:07.883931  284750 certs.go:227] acquiring lock for ca certs: {Name:mk04e97b22fe69e444baefaaf0d53a0afe979470 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:34:07.884352  284750 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21934-9122/.minikube/ca.key
	I1122 00:34:07.884635  284750 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21934-9122/.minikube/proxy-client-ca.key
	I1122 00:34:07.884665  284750 certs.go:257] generating profile certs ...
	I1122 00:34:07.884744  284750 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/auto-239758/client.key
	I1122 00:34:07.884771  284750 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/auto-239758/client.crt with IP's: []
	I1122 00:34:07.978603  284750 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/auto-239758/client.crt ...
	I1122 00:34:07.978642  284750 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/auto-239758/client.crt: {Name:mkfc1184f4ba320b02dd5ec6ab99f2616684acae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:34:07.978825  284750 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/auto-239758/client.key ...
	I1122 00:34:07.978844  284750 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/auto-239758/client.key: {Name:mkf59416b7d53191b3a67243ba8eb72950bb0642 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:34:07.978985  284750 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/auto-239758/apiserver.key.90ce4727
	I1122 00:34:07.979011  284750 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/auto-239758/apiserver.crt.90ce4727 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1122 00:34:08.048292  284750 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/auto-239758/apiserver.crt.90ce4727 ...
	I1122 00:34:08.048325  284750 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/auto-239758/apiserver.crt.90ce4727: {Name:mk215cfd1a9a36c821e4052a239d52967b43892c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:34:08.048526  284750 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/auto-239758/apiserver.key.90ce4727 ...
	I1122 00:34:08.048562  284750 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/auto-239758/apiserver.key.90ce4727: {Name:mke5dfec5af6ec29ceb216011f111cb78b25a57b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:34:08.048694  284750 certs.go:382] copying /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/auto-239758/apiserver.crt.90ce4727 -> /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/auto-239758/apiserver.crt
	I1122 00:34:08.048808  284750 certs.go:386] copying /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/auto-239758/apiserver.key.90ce4727 -> /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/auto-239758/apiserver.key
	I1122 00:34:08.048910  284750 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/auto-239758/proxy-client.key
	I1122 00:34:08.048927  284750 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/auto-239758/proxy-client.crt with IP's: []
	I1122 00:34:08.093124  284750 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/auto-239758/proxy-client.crt ...
	I1122 00:34:08.093155  284750 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/auto-239758/proxy-client.crt: {Name:mk3be27dd950073f5eb01d6f27ac19270180f360 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:34:08.093403  284750 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/auto-239758/proxy-client.key ...
	I1122 00:34:08.093425  284750 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/auto-239758/proxy-client.key: {Name:mk0fdf50514a0c36cbff6b5580bafb5956031ef8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:34:08.093683  284750 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/14585.pem (1338 bytes)
	W1122 00:34:08.093732  284750 certs.go:480] ignoring /home/jenkins/minikube-integration/21934-9122/.minikube/certs/14585_empty.pem, impossibly tiny 0 bytes
	I1122 00:34:08.093747  284750 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca-key.pem (1679 bytes)
	I1122 00:34:08.093779  284750 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem (1078 bytes)
	I1122 00:34:08.093842  284750 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem (1123 bytes)
	I1122 00:34:08.093955  284750 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/key.pem (1675 bytes)
	I1122 00:34:08.094028  284750 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem (1708 bytes)
	I1122 00:34:08.094841  284750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1122 00:34:08.117896  284750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1122 00:34:08.139903  284750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1122 00:34:08.162632  284750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1122 00:34:08.181918  284750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/auto-239758/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1122 00:34:08.198669  284750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/auto-239758/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1122 00:34:08.217021  284750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/auto-239758/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1122 00:34:08.237536  284750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/auto-239758/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1122 00:34:08.255084  284750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1122 00:34:08.274447  284750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/certs/14585.pem --> /usr/share/ca-certificates/14585.pem (1338 bytes)
	I1122 00:34:08.292371  284750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem --> /usr/share/ca-certificates/145852.pem (1708 bytes)
	I1122 00:34:08.313406  284750 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1122 00:34:08.327707  284750 ssh_runner.go:195] Run: openssl version
	I1122 00:34:08.335128  284750 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1122 00:34:08.344923  284750 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:34:08.349225  284750 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 23:46 /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:34:08.349291  284750 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:34:08.401896  284750 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1122 00:34:08.412848  284750 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14585.pem && ln -fs /usr/share/ca-certificates/14585.pem /etc/ssl/certs/14585.pem"
	I1122 00:34:08.423466  284750 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14585.pem
	I1122 00:34:08.428042  284750 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 23:52 /usr/share/ca-certificates/14585.pem
	I1122 00:34:08.428107  284750 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14585.pem
	I1122 00:34:08.481001  284750 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14585.pem /etc/ssl/certs/51391683.0"
	I1122 00:34:08.492426  284750 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145852.pem && ln -fs /usr/share/ca-certificates/145852.pem /etc/ssl/certs/145852.pem"
	I1122 00:34:08.502732  284750 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145852.pem
	I1122 00:34:08.507224  284750 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 23:52 /usr/share/ca-certificates/145852.pem
	I1122 00:34:08.507276  284750 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145852.pem
	I1122 00:34:08.563489  284750 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145852.pem /etc/ssl/certs/3ec20f2e.0"
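
	The openssl/ln pairs above install each CA cert under the hash-named symlink (<subject-hash>.0) that OpenSSL's cert directory lookup expects. A minimal Go sketch of one such install, delegating the subject-hash computation to the openssl binary — linkBySubjectHash is a hypothetical helper:

    package certs

    import (
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkBySubjectHash symlinks a CA cert as <subject-hash>.0 in the
    // system cert dir, as the openssl x509 -hash / ln -fs pair does.
    func linkBySubjectHash(certPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join(certsDir, hash+".0")
        _ = os.Remove(link) // ln -fs semantics: replace any stale link
        return os.Symlink(certPath, link)
    }
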
	I1122 00:34:08.574437  284750 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1122 00:34:08.578981  284750 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1122 00:34:08.579043  284750 kubeadm.go:401] StartCluster: {Name:auto-239758 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-239758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:34:08.579161  284750 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1122 00:34:08.579238  284750 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1122 00:34:08.613589  284750 cri.go:89] found id: ""
	I1122 00:34:08.613653  284750 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1122 00:34:08.623321  284750 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1122 00:34:08.632641  284750 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1122 00:34:08.632691  284750 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1122 00:34:08.642362  284750 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1122 00:34:08.642379  284750 kubeadm.go:158] found existing configuration files:
	
	I1122 00:34:08.642419  284750 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1122 00:34:08.651933  284750 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1122 00:34:08.651985  284750 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1122 00:34:08.661872  284750 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1122 00:34:08.671909  284750 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1122 00:34:08.671957  284750 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1122 00:34:08.681346  284750 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1122 00:34:08.691576  284750 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1122 00:34:08.691629  284750 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1122 00:34:08.699968  284750 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1122 00:34:08.709825  284750 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1122 00:34:08.709878  284750 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1122 00:34:08.719658  284750 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1122 00:34:08.768404  284750 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1122 00:34:08.768481  284750 kubeadm.go:319] [preflight] Running pre-flight checks
	I1122 00:34:08.795229  284750 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1122 00:34:08.795350  284750 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1122 00:34:08.795418  284750 kubeadm.go:319] OS: Linux
	I1122 00:34:08.795482  284750 kubeadm.go:319] CGROUPS_CPU: enabled
	I1122 00:34:08.795562  284750 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1122 00:34:08.795639  284750 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1122 00:34:08.795713  284750 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1122 00:34:08.795794  284750 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1122 00:34:08.795882  284750 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1122 00:34:08.795950  284750 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1122 00:34:08.796068  284750 kubeadm.go:319] CGROUPS_IO: enabled
	I1122 00:34:08.865279  284750 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1122 00:34:08.865455  284750 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1122 00:34:08.865589  284750 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1122 00:34:08.873154  284750 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1122 00:34:08.878355  284750 out.go:252]   - Generating certificates and keys ...
	I1122 00:34:08.878464  284750 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1122 00:34:08.878583  284750 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1122 00:34:06.781846  286707 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21934-9122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-239758:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -I lz4 -xf /preloaded.tar -C /extractDir: (5.604780911s)
	I1122 00:34:06.781887  286707 kic.go:203] duration metric: took 5.604978509s to extract preloaded images to volume ...
	W1122 00:34:06.782006  286707 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1122 00:34:06.782049  286707 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1122 00:34:06.782130  286707 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1122 00:34:06.842182  286707 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-239758 --name kindnet-239758 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-239758 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-239758 --network kindnet-239758 --ip 192.168.103.2 --volume kindnet-239758:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e
	I1122 00:34:07.204655  286707 cli_runner.go:164] Run: docker container inspect kindnet-239758 --format={{.State.Running}}
	I1122 00:34:07.229271  286707 cli_runner.go:164] Run: docker container inspect kindnet-239758 --format={{.State.Status}}
	I1122 00:34:07.253505  286707 cli_runner.go:164] Run: docker exec kindnet-239758 stat /var/lib/dpkg/alternatives/iptables
	I1122 00:34:07.310111  286707 oci.go:144] the created container "kindnet-239758" has a running status.
	I1122 00:34:07.310165  286707 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21934-9122/.minikube/machines/kindnet-239758/id_rsa...
	I1122 00:34:07.541986  286707 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21934-9122/.minikube/machines/kindnet-239758/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1122 00:34:07.582736  286707 cli_runner.go:164] Run: docker container inspect kindnet-239758 --format={{.State.Status}}
	I1122 00:34:07.610137  286707 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1122 00:34:07.610222  286707 kic_runner.go:114] Args: [docker exec --privileged kindnet-239758 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1122 00:34:07.672689  286707 cli_runner.go:164] Run: docker container inspect kindnet-239758 --format={{.State.Status}}
	I1122 00:34:07.702200  286707 machine.go:94] provisionDockerMachine start ...
	I1122 00:34:07.702373  286707 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-239758
	I1122 00:34:07.729237  286707 main.go:143] libmachine: Using SSH client type: native
	I1122 00:34:07.729964  286707 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1122 00:34:07.730007  286707 main.go:143] libmachine: About to run SSH command:
	hostname
	I1122 00:34:07.871626  286707 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-239758
	
	I1122 00:34:07.871670  286707 ubuntu.go:182] provisioning hostname "kindnet-239758"
	I1122 00:34:07.871736  286707 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-239758
	I1122 00:34:07.896204  286707 main.go:143] libmachine: Using SSH client type: native
	I1122 00:34:07.896540  286707 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1122 00:34:07.896566  286707 main.go:143] libmachine: About to run SSH command:
	sudo hostname kindnet-239758 && echo "kindnet-239758" | sudo tee /etc/hostname
	I1122 00:34:08.051684  286707 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-239758
	
	I1122 00:34:08.051768  286707 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-239758
	I1122 00:34:08.073984  286707 main.go:143] libmachine: Using SSH client type: native
	I1122 00:34:08.074284  286707 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1122 00:34:08.074330  286707 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-239758' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-239758/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-239758' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1122 00:34:08.213364  286707 main.go:143] libmachine: SSH cmd err, output: <nil>: 
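
The SSH script above is the idempotent hostname fix-up applied to every node: it leaves /etc/hosts alone when the hostname already resolves, rewrites an existing 127.0.1.1 line if one is present, and otherwise appends one. A minimal standalone Go sketch of the same logic (illustration only, not minikube's actual implementation; hostname and path taken from the log):

// ensureHostname mirrors the shell script logged above: map 127.0.1.1 to
// the node hostname only if no /etc/hosts entry for it exists yet.
package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

func ensureHostname(hostsPath, hostname string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	content := string(data)
	// Equivalent of `grep -xq '.*\s<hostname>'`: some line already ends
	// with the hostname, so there is nothing to do.
	if regexp.MustCompile(`(?m)^.*\s`+regexp.QuoteMeta(hostname)+`$`).MatchString(content) {
		return nil
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(content) {
		// Rewrite the existing 127.0.1.1 line in place (the sed branch).
		content = loopback.ReplaceAllString(content, "127.0.1.1 "+hostname)
	} else {
		// No 127.0.1.1 line yet: append one (the `tee -a` branch).
		if !strings.HasSuffix(content, "\n") {
			content += "\n"
		}
		content += "127.0.1.1 " + hostname + "\n"
	}
	return os.WriteFile(hostsPath, []byte(content), 0644)
}

func main() {
	if err := ensureHostname("/etc/hosts", "kindnet-239758"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}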
	I1122 00:34:08.213390  286707 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21934-9122/.minikube CaCertPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21934-9122/.minikube}
	I1122 00:34:08.213422  286707 ubuntu.go:190] setting up certificates
	I1122 00:34:08.213435  286707 provision.go:84] configureAuth start
	I1122 00:34:08.213495  286707 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-239758
	I1122 00:34:08.235768  286707 provision.go:143] copyHostCerts
	I1122 00:34:08.235832  286707 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9122/.minikube/key.pem, removing ...
	I1122 00:34:08.235841  286707 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9122/.minikube/key.pem
	I1122 00:34:08.235892  286707 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21934-9122/.minikube/key.pem (1675 bytes)
	I1122 00:34:08.235983  286707 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9122/.minikube/ca.pem, removing ...
	I1122 00:34:08.235993  286707 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9122/.minikube/ca.pem
	I1122 00:34:08.236022  286707 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21934-9122/.minikube/ca.pem (1078 bytes)
	I1122 00:34:08.236098  286707 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9122/.minikube/cert.pem, removing ...
	I1122 00:34:08.236109  286707 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9122/.minikube/cert.pem
	I1122 00:34:08.236153  286707 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21934-9122/.minikube/cert.pem (1123 bytes)
	I1122 00:34:08.236206  286707 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21934-9122/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca-key.pem org=jenkins.kindnet-239758 san=[127.0.0.1 192.168.103.2 kindnet-239758 localhost minikube]
	I1122 00:34:08.447938  286707 provision.go:177] copyRemoteCerts
	I1122 00:34:08.447996  286707 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1122 00:34:08.448043  286707 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-239758
	I1122 00:34:08.470946  286707 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/kindnet-239758/id_rsa Username:docker}
	I1122 00:34:08.572892  286707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1122 00:34:08.599559  286707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1122 00:34:08.621488  286707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1122 00:34:08.643004  286707 provision.go:87] duration metric: took 429.558065ms to configureAuth
	I1122 00:34:08.643027  286707 ubuntu.go:206] setting minikube options for container-runtime
	I1122 00:34:08.643359  286707 config.go:182] Loaded profile config "kindnet-239758": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:34:08.643486  286707 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-239758
	I1122 00:34:08.665788  286707 main.go:143] libmachine: Using SSH client type: native
	I1122 00:34:08.666160  286707 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1122 00:34:08.666189  286707 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1122 00:34:08.971415  286707 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1122 00:34:08.971444  286707 machine.go:97] duration metric: took 1.269155824s to provisionDockerMachine
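
The final provisioning step above writes a one-line environment file, /etc/sysconfig/crio.minikube, that the CRI-O unit sources, injecting --insecure-registry for the 10.96.0.0/12 service CIDR before restarting crio. A standalone sketch producing the same file (contents copied from the log; not minikube's code, and the restart is left to systemctl):

// Write the CRIO_MINIKUBE_OPTIONS drop-in exactly as logged above.
package main

import "os"

func main() {
	const content = "\nCRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"
	if err := os.MkdirAll("/etc/sysconfig", 0755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/sysconfig/crio.minikube", []byte(content), 0644); err != nil {
		panic(err)
	}
	// A real provisioner would follow up with `systemctl restart crio`,
	// as the SSH command in the log does.
}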
	I1122 00:34:08.971460  286707 client.go:176] duration metric: took 8.431659485s to LocalClient.Create
	I1122 00:34:08.971486  286707 start.go:167] duration metric: took 8.431722153s to libmachine.API.Create "kindnet-239758"
	I1122 00:34:08.971502  286707 start.go:293] postStartSetup for "kindnet-239758" (driver="docker")
	I1122 00:34:08.971519  286707 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1122 00:34:08.971614  286707 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1122 00:34:08.971669  286707 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-239758
	I1122 00:34:08.993625  286707 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/kindnet-239758/id_rsa Username:docker}
	I1122 00:34:09.085884  286707 ssh_runner.go:195] Run: cat /etc/os-release
	I1122 00:34:09.089670  286707 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1122 00:34:09.089702  286707 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1122 00:34:09.089714  286707 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-9122/.minikube/addons for local assets ...
	I1122 00:34:09.089768  286707 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-9122/.minikube/files for local assets ...
	I1122 00:34:09.089859  286707 filesync.go:149] local asset: /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem -> 145852.pem in /etc/ssl/certs
	I1122 00:34:09.089976  286707 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1122 00:34:09.097639  286707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem --> /etc/ssl/certs/145852.pem (1708 bytes)
	I1122 00:34:09.118650  286707 start.go:296] duration metric: took 147.131879ms for postStartSetup
	I1122 00:34:09.119156  286707 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-239758
	I1122 00:34:09.138426  286707 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/kindnet-239758/config.json ...
	I1122 00:34:09.138673  286707 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:34:09.138719  286707 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-239758
	I1122 00:34:09.160473  286707 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/kindnet-239758/id_rsa Username:docker}
	I1122 00:34:09.249627  286707 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 00:34:09.254093  286707 start.go:128] duration metric: took 8.716893046s to createHost
	I1122 00:34:09.254113  286707 start.go:83] releasing machines lock for "kindnet-239758", held for 8.717064873s
	I1122 00:34:09.254174  286707 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-239758
	I1122 00:34:09.272883  286707 ssh_runner.go:195] Run: cat /version.json
	I1122 00:34:09.272928  286707 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-239758
	I1122 00:34:09.272955  286707 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1122 00:34:09.273021  286707 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-239758
	I1122 00:34:09.290441  286707 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/kindnet-239758/id_rsa Username:docker}
	I1122 00:34:09.291435  286707 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/kindnet-239758/id_rsa Username:docker}
	I1122 00:34:09.451506  286707 ssh_runner.go:195] Run: systemctl --version
	I1122 00:34:09.457957  286707 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1122 00:34:09.491607  286707 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1122 00:34:09.496088  286707 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1122 00:34:09.496151  286707 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1122 00:34:09.520321  286707 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
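
The find/mv pass above disables any pre-existing bridge or podman CNI configs by renaming them with a .mk_disabled suffix, so only the CNI that minikube installs (kindnet here) stays active. A standalone sketch of the same rename-to-disable sweep (path from the log; not minikube's implementation):

// Disable bridge/podman CNI configs in /etc/cni/net.d by renaming them.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	matches, err := filepath.Glob("/etc/cni/net.d/*")
	if err != nil {
		panic(err)
	}
	for _, p := range matches {
		fi, err := os.Stat(p)
		if err != nil || fi.IsDir() {
			continue // find used -maxdepth 1 -type f: regular files only
		}
		base := filepath.Base(p)
		if strings.HasSuffix(base, ".mk_disabled") {
			continue // already disabled on a previous pass
		}
		if strings.Contains(base, "bridge") || strings.Contains(base, "podman") {
			if err := os.Rename(p, p+".mk_disabled"); err == nil {
				fmt.Println("disabled:", p)
			}
		}
	}
}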
	I1122 00:34:09.520344  286707 start.go:496] detecting cgroup driver to use...
	I1122 00:34:09.520371  286707 detect.go:190] detected "systemd" cgroup driver on host os
	I1122 00:34:09.520410  286707 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1122 00:34:09.536015  286707 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1122 00:34:09.548165  286707 docker.go:218] disabling cri-docker service (if available) ...
	I1122 00:34:09.548235  286707 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1122 00:34:09.566505  286707 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1122 00:34:09.585506  286707 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1122 00:34:09.667413  286707 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1122 00:34:09.756721  286707 docker.go:234] disabling docker service ...
	I1122 00:34:09.756781  286707 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1122 00:34:09.774004  286707 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1122 00:34:09.786746  286707 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1122 00:34:09.871875  286707 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1122 00:34:09.960016  286707 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1122 00:34:09.971742  286707 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1122 00:34:09.985150  286707 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1122 00:34:09.985199  286707 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:34:09.994664  286707 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1122 00:34:09.994717  286707 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:34:10.003251  286707 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:34:10.011083  286707 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:34:10.019408  286707 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1122 00:34:10.026820  286707 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:34:10.034746  286707 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:34:10.048757  286707 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:34:10.058245  286707 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1122 00:34:10.066184  286707 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1122 00:34:10.073098  286707 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:34:10.163102  286707 ssh_runner.go:195] Run: sudo systemctl restart crio
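
The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pinning the pause image, forcing cgroup_manager to "systemd" to match the detected host cgroup driver, and opening unprivileged ports via default_sysctls, before daemon-reload and the crio restart. A standalone sketch of one of these line-oriented rewrites, the cgroup_manager one (path and value from the log; not minikube's code):

// Force cgroup_manager = "systemd" in CRI-O's drop-in config, the same
// substitution the sed command above performs.
package main

import (
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	// Matches any existing cgroup_manager line, commented or not,
	// mirroring sed's `s|^.*cgroup_manager = .*$|...|`.
	re := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	out := re.ReplaceAll(data, []byte(`cgroup_manager = "systemd"`))
	if err := os.WriteFile(conf, out, 0644); err != nil {
		panic(err)
	}
}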
	W1122 00:34:05.421490  280462 pod_ready.go:104] pod "coredns-66bc5c9577-np5nq" is not "Ready", error: <nil>
	W1122 00:34:07.426493  280462 pod_ready.go:104] pod "coredns-66bc5c9577-np5nq" is not "Ready", error: <nil>
	W1122 00:34:09.922074  280462 pod_ready.go:104] pod "coredns-66bc5c9577-np5nq" is not "Ready", error: <nil>
	I1122 00:34:10.640673  286707 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1122 00:34:10.640740  286707 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1122 00:34:10.644606  286707 start.go:564] Will wait 60s for crictl version
	I1122 00:34:10.644664  286707 ssh_runner.go:195] Run: which crictl
	I1122 00:34:10.648089  286707 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1122 00:34:10.671508  286707 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1122 00:34:10.671580  286707 ssh_runner.go:195] Run: crio --version
	I1122 00:34:10.698099  286707 ssh_runner.go:195] Run: crio --version
	I1122 00:34:10.725264  286707 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1122 00:34:10.726363  286707 cli_runner.go:164] Run: docker network inspect kindnet-239758 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:34:10.744402  286707 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1122 00:34:10.748357  286707 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:34:10.758403  286707 kubeadm.go:884] updating cluster {Name:kindnet-239758 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-239758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1122 00:34:10.758534  286707 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:34:10.758592  286707 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:34:10.791205  286707 crio.go:514] all images are preloaded for cri-o runtime.
	I1122 00:34:10.791224  286707 crio.go:433] Images already preloaded, skipping extraction
	I1122 00:34:10.791266  286707 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:34:10.816235  286707 crio.go:514] all images are preloaded for cri-o runtime.
	I1122 00:34:10.816252  286707 cache_images.go:86] Images are preloaded, skipping loading
	I1122 00:34:10.816259  286707 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1122 00:34:10.816340  286707 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kindnet-239758 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:kindnet-239758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I1122 00:34:10.816401  286707 ssh_runner.go:195] Run: crio config
	I1122 00:34:10.860398  286707 cni.go:84] Creating CNI manager for "kindnet"
	I1122 00:34:10.860429  286707 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1122 00:34:10.860459  286707 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-239758 NodeName:kindnet-239758 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1122 00:34:10.860625  286707 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-239758"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
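The generated kubeadm.yaml above stacks four YAML documents: InitConfiguration (node registration and CRI socket), ClusterConfiguration (control-plane endpoint, certificate dir, component extraArgs), KubeletConfiguration (systemd cgroup driver, disabled eviction), and KubeProxyConfiguration (cluster CIDR, conntrack overrides). A small stdlib-only sketch that splits such a multi-document file and lists each kind (the local file name is an assumption; illustration only):

// List the `kind:` of every YAML document in a multi-document kubeadm
// config, splitting on the `---` separators shown above.
package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

func main() {
	data, err := os.ReadFile("kubeadm.yaml") // hypothetical local copy
	if err != nil {
		panic(err)
	}
	kindRe := regexp.MustCompile(`(?m)^kind:\s*(\S+)`)
	for i, doc := range strings.Split(string(data), "\n---") {
		if m := kindRe.FindStringSubmatch(doc); m != nil {
			fmt.Printf("document %d: %s\n", i+1, m[1])
		}
	}
}
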
	I1122 00:34:10.860702  286707 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1122 00:34:10.868685  286707 binaries.go:51] Found k8s binaries, skipping transfer
	I1122 00:34:10.868746  286707 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1122 00:34:10.876447  286707 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (365 bytes)
	I1122 00:34:10.889003  286707 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1122 00:34:10.904252  286707 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1122 00:34:10.917320  286707 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1122 00:34:10.921876  286707 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:34:10.931993  286707 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:34:11.011069  286707 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:34:11.033111  286707 certs.go:69] Setting up /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/kindnet-239758 for IP: 192.168.103.2
	I1122 00:34:11.033132  286707 certs.go:195] generating shared ca certs ...
	I1122 00:34:11.033150  286707 certs.go:227] acquiring lock for ca certs: {Name:mk04e97b22fe69e444baefaaf0d53a0afe979470 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:34:11.033334  286707 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21934-9122/.minikube/ca.key
	I1122 00:34:11.033402  286707 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21934-9122/.minikube/proxy-client-ca.key
	I1122 00:34:11.033429  286707 certs.go:257] generating profile certs ...
	I1122 00:34:11.033504  286707 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/kindnet-239758/client.key
	I1122 00:34:11.033527  286707 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/kindnet-239758/client.crt with IP's: []
	I1122 00:34:11.163412  286707 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/kindnet-239758/client.crt ...
	I1122 00:34:11.163439  286707 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/kindnet-239758/client.crt: {Name:mk12e35357bc50b638b9d2807f95f0d949aa140f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:34:11.163629  286707 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/kindnet-239758/client.key ...
	I1122 00:34:11.163672  286707 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/kindnet-239758/client.key: {Name:mk80e4f10e8dfe338873fa0d5bb88cf1cd2ebf1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:34:11.163840  286707 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/kindnet-239758/apiserver.key.e8696ebc
	I1122 00:34:11.163871  286707 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/kindnet-239758/apiserver.crt.e8696ebc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1122 00:34:11.229424  286707 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/kindnet-239758/apiserver.crt.e8696ebc ...
	I1122 00:34:11.229444  286707 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/kindnet-239758/apiserver.crt.e8696ebc: {Name:mk828042cef16f2793302001c0a212c42c1fb697 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:34:11.229574  286707 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/kindnet-239758/apiserver.key.e8696ebc ...
	I1122 00:34:11.229595  286707 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/kindnet-239758/apiserver.key.e8696ebc: {Name:mk22bb07d09afcea8c9ea84c225ef6ad224c541c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:34:11.229708  286707 certs.go:382] copying /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/kindnet-239758/apiserver.crt.e8696ebc -> /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/kindnet-239758/apiserver.crt
	I1122 00:34:11.229804  286707 certs.go:386] copying /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/kindnet-239758/apiserver.key.e8696ebc -> /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/kindnet-239758/apiserver.key
	I1122 00:34:11.229884  286707 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/kindnet-239758/proxy-client.key
	I1122 00:34:11.229904  286707 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/kindnet-239758/proxy-client.crt with IP's: []
	I1122 00:34:11.275333  286707 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/kindnet-239758/proxy-client.crt ...
	I1122 00:34:11.275355  286707 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/kindnet-239758/proxy-client.crt: {Name:mk12cd1a58856d0f6c69eb05633c61234555c032 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:34:11.275500  286707 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/kindnet-239758/proxy-client.key ...
	I1122 00:34:11.275519  286707 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/kindnet-239758/proxy-client.key: {Name:mk904b4392ca08cd697ca3ff5a09755d4d269881 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:34:11.275723  286707 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/14585.pem (1338 bytes)
	W1122 00:34:11.275762  286707 certs.go:480] ignoring /home/jenkins/minikube-integration/21934-9122/.minikube/certs/14585_empty.pem, impossibly tiny 0 bytes
	I1122 00:34:11.275771  286707 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca-key.pem (1679 bytes)
	I1122 00:34:11.275805  286707 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem (1078 bytes)
	I1122 00:34:11.275841  286707 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem (1123 bytes)
	I1122 00:34:11.275876  286707 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/key.pem (1675 bytes)
	I1122 00:34:11.275942  286707 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem (1708 bytes)
	I1122 00:34:11.276817  286707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1122 00:34:11.295151  286707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1122 00:34:11.311357  286707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1122 00:34:11.327627  286707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1122 00:34:11.343732  286707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/kindnet-239758/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1122 00:34:11.359470  286707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/kindnet-239758/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1122 00:34:11.375586  286707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/kindnet-239758/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1122 00:34:11.392600  286707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/kindnet-239758/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1122 00:34:11.408513  286707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem --> /usr/share/ca-certificates/145852.pem (1708 bytes)
	I1122 00:34:11.427219  286707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1122 00:34:11.443637  286707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/certs/14585.pem --> /usr/share/ca-certificates/14585.pem (1338 bytes)
	I1122 00:34:11.459368  286707 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1122 00:34:11.470726  286707 ssh_runner.go:195] Run: openssl version
	I1122 00:34:11.476163  286707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145852.pem && ln -fs /usr/share/ca-certificates/145852.pem /etc/ssl/certs/145852.pem"
	I1122 00:34:11.484980  286707 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145852.pem
	I1122 00:34:11.488518  286707 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 23:52 /usr/share/ca-certificates/145852.pem
	I1122 00:34:11.488568  286707 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145852.pem
	I1122 00:34:11.522485  286707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145852.pem /etc/ssl/certs/3ec20f2e.0"
	I1122 00:34:11.529986  286707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1122 00:34:11.537735  286707 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:34:11.541330  286707 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 23:46 /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:34:11.541379  286707 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:34:11.574711  286707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1122 00:34:11.582469  286707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14585.pem && ln -fs /usr/share/ca-certificates/14585.pem /etc/ssl/certs/14585.pem"
	I1122 00:34:11.589981  286707 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14585.pem
	I1122 00:34:11.593291  286707 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 23:52 /usr/share/ca-certificates/14585.pem
	I1122 00:34:11.593327  286707 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14585.pem
	I1122 00:34:11.626725  286707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14585.pem /etc/ssl/certs/51391683.0"
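
Each certificate install above follows the same three steps: copy the PEM into /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink it into /etc/ssl/certs as <hash>.0, which is how OpenSSL's hashed trust directory is indexed. A standalone sketch of the hash-and-link step (paths from the log; not minikube's implementation):

// Install a CA certificate into an OpenSSL hashed trust directory by
// linking it under its subject hash, mirroring the openssl/ln pair above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installTrusted(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941 for minikubeCA.pem above
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // mirror `ln -fs`: replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	if err := installTrusted("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}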
	I1122 00:34:11.634291  286707 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1122 00:34:11.637639  286707 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1122 00:34:11.637698  286707 kubeadm.go:401] StartCluster: {Name:kindnet-239758 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-239758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:34:11.637765  286707 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1122 00:34:11.637797  286707 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1122 00:34:11.665626  286707 cri.go:89] found id: ""
	I1122 00:34:11.665689  286707 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1122 00:34:11.674708  286707 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1122 00:34:11.683213  286707 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1122 00:34:11.683266  286707 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1122 00:34:11.690937  286707 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1122 00:34:11.690951  286707 kubeadm.go:158] found existing configuration files:
	
	I1122 00:34:11.690987  286707 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1122 00:34:11.698250  286707 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1122 00:34:11.698300  286707 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1122 00:34:11.704901  286707 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1122 00:34:11.711885  286707 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1122 00:34:11.711922  286707 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1122 00:34:11.718603  286707 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1122 00:34:11.725359  286707 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1122 00:34:11.725406  286707 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1122 00:34:11.732132  286707 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1122 00:34:11.738942  286707 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1122 00:34:11.738982  286707 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
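
The four grep/rm pairs above apply one rule per kubeconfig: if /etc/kubernetes/<name>.conf does not reference https://control-plane.minikube.internal:8443, delete it so the kubeadm init that follows regenerates it from scratch. A standalone sketch of that cleanup loop (endpoint and paths from the log; not minikube's code):

// Remove kubeconfigs that are missing or point at the wrong API endpoint,
// so `kubeadm init` rewrites them.
package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		path := "/etc/kubernetes/" + f
		data, err := os.ReadFile(path)
		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
			os.Remove(path) // absent or stale: safe to let kubeadm recreate it
			fmt.Println("cleaned:", path)
		}
	}
}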
	I1122 00:34:11.746080  286707 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1122 00:34:11.783345  286707 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1122 00:34:11.783398  286707 kubeadm.go:319] [preflight] Running pre-flight checks
	I1122 00:34:11.816825  286707 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1122 00:34:11.816907  286707 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1122 00:34:11.816950  286707 kubeadm.go:319] OS: Linux
	I1122 00:34:11.817003  286707 kubeadm.go:319] CGROUPS_CPU: enabled
	I1122 00:34:11.817100  286707 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1122 00:34:11.817197  286707 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1122 00:34:11.817281  286707 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1122 00:34:11.817372  286707 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1122 00:34:11.817464  286707 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1122 00:34:11.817567  286707 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1122 00:34:11.817640  286707 kubeadm.go:319] CGROUPS_IO: enabled
	I1122 00:34:11.877114  286707 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1122 00:34:11.877248  286707 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1122 00:34:11.877387  286707 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1122 00:34:11.884160  286707 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1122 00:34:09.133411  284750 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1122 00:34:09.263323  284750 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1122 00:34:09.661009  284750 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1122 00:34:09.894638  284750 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1122 00:34:10.397480  284750 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1122 00:34:10.397622  284750 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-239758 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1122 00:34:10.471165  284750 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1122 00:34:10.471352  284750 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-239758 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1122 00:34:10.776587  284750 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1122 00:34:11.476006  284750 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1122 00:34:11.677481  284750 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1122 00:34:11.677588  284750 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1122 00:34:12.172097  284750 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1122 00:34:12.272173  284750 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1122 00:34:12.868262  284750 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1122 00:34:13.332139  284750 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1122 00:34:13.668445  284750 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1122 00:34:13.668950  284750 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1122 00:34:13.672614  284750 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1122 00:34:13.673981  284750 out.go:252]   - Booting up control plane ...
	I1122 00:34:13.674112  284750 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1122 00:34:13.674213  284750 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1122 00:34:13.674784  284750 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1122 00:34:13.689077  284750 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1122 00:34:13.689224  284750 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1122 00:34:13.695478  284750 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1122 00:34:13.695777  284750 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1122 00:34:13.695863  284750 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1122 00:34:13.794118  284750 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1122 00:34:13.794291  284750 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1122 00:34:11.886792  286707 out.go:252]   - Generating certificates and keys ...
	I1122 00:34:11.886883  286707 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1122 00:34:11.886961  286707 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1122 00:34:12.028427  286707 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1122 00:34:12.128735  286707 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1122 00:34:12.766322  286707 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1122 00:34:12.834781  286707 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1122 00:34:12.907610  286707 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1122 00:34:12.907722  286707 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [kindnet-239758 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1122 00:34:13.114850  286707 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1122 00:34:13.114986  286707 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [kindnet-239758 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1122 00:34:13.387144  286707 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1122 00:34:13.802354  286707 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1122 00:34:13.910427  286707 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1122 00:34:13.910569  286707 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1122 00:34:13.981136  286707 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1122 00:34:14.397041  286707 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1122 00:34:14.873747  286707 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1122 00:34:15.287342  286707 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	W1122 00:34:12.422242  280462 pod_ready.go:104] pod "coredns-66bc5c9577-np5nq" is not "Ready", error: <nil>
	W1122 00:34:14.922910  280462 pod_ready.go:104] pod "coredns-66bc5c9577-np5nq" is not "Ready", error: <nil>
	I1122 00:34:16.007260  286707 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1122 00:34:16.008023  286707 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1122 00:34:16.012589  286707 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	
	
	==> CRI-O <==
	Nov 22 00:33:39 embed-certs-084979 crio[569]: time="2025-11-22T00:33:39.831417764Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 22 00:33:40 embed-certs-084979 crio[569]: time="2025-11-22T00:33:40.058796619Z" level=info msg="Removing container: d74a079781558f1182cd43c01109db894cf380dec8da7b6d01ff36bf58567df2" id=ae7c41dc-0020-448c-8b83-3fcda50ed8a8 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 22 00:33:40 embed-certs-084979 crio[569]: time="2025-11-22T00:33:40.101677368Z" level=info msg="Removed container d74a079781558f1182cd43c01109db894cf380dec8da7b6d01ff36bf58567df2: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dxs97/dashboard-metrics-scraper" id=ae7c41dc-0020-448c-8b83-3fcda50ed8a8 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 22 00:33:57 embed-certs-084979 crio[569]: time="2025-11-22T00:33:57.988452267Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=bd107cd5-0758-48ef-9a12-12f52c755863 name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:33:58 embed-certs-084979 crio[569]: time="2025-11-22T00:33:58.024537725Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=11f339df-3fef-4ed6-8ea6-a18c0e93490f name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:33:58 embed-certs-084979 crio[569]: time="2025-11-22T00:33:58.025728245Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dxs97/dashboard-metrics-scraper" id=23a9b3f6-6d6f-4d9a-844d-113abeacaa4a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:33:58 embed-certs-084979 crio[569]: time="2025-11-22T00:33:58.025872047Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:33:58 embed-certs-084979 crio[569]: time="2025-11-22T00:33:58.061698244Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:33:58 embed-certs-084979 crio[569]: time="2025-11-22T00:33:58.062331411Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:33:58 embed-certs-084979 crio[569]: time="2025-11-22T00:33:58.095669963Z" level=info msg="Created container eac069e8ad82b5ef32220afcf3eb8a231f95e7ad1eb61bc623d9fd8633ea1791: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dxs97/dashboard-metrics-scraper" id=23a9b3f6-6d6f-4d9a-844d-113abeacaa4a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:33:58 embed-certs-084979 crio[569]: time="2025-11-22T00:33:58.09634497Z" level=info msg="Starting container: eac069e8ad82b5ef32220afcf3eb8a231f95e7ad1eb61bc623d9fd8633ea1791" id=e92441e6-e12d-4728-a5f5-1ea1122e27b3 name=/runtime.v1.RuntimeService/StartContainer
	Nov 22 00:33:58 embed-certs-084979 crio[569]: time="2025-11-22T00:33:58.098553714Z" level=info msg="Started container" PID=1771 containerID=eac069e8ad82b5ef32220afcf3eb8a231f95e7ad1eb61bc623d9fd8633ea1791 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dxs97/dashboard-metrics-scraper id=e92441e6-e12d-4728-a5f5-1ea1122e27b3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=aa07ede9d3e9c59e215c3ff077fb908d4a4145e014d55700511881f47ee14512
	Nov 22 00:34:00 embed-certs-084979 crio[569]: time="2025-11-22T00:34:00.117434749Z" level=info msg="Removing container: 8a153fddc9f29ff204a07153d9c618c6f244d735dfee7fb2149fe2f3cc78e05f" id=dae9cb3c-70a4-404a-8fae-ba1ec0c8e0f9 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 22 00:34:00 embed-certs-084979 crio[569]: time="2025-11-22T00:34:00.117963423Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=9bc35895-41e8-47d4-9b6d-a725f60ff4e9 name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:34:00 embed-certs-084979 crio[569]: time="2025-11-22T00:34:00.119605102Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=79524748-1522-43ba-b082-f931d2ba125b name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:34:00 embed-certs-084979 crio[569]: time="2025-11-22T00:34:00.121098771Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=c225867e-bc05-4dc4-babb-68bf1f8c1a17 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:34:00 embed-certs-084979 crio[569]: time="2025-11-22T00:34:00.121233564Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:34:00 embed-certs-084979 crio[569]: time="2025-11-22T00:34:00.125619762Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:34:00 embed-certs-084979 crio[569]: time="2025-11-22T00:34:00.125816328Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/280aa55ec0e03cfa840ec220975c34c2ed5ade669cc6796fc11859d513100364/merged/etc/passwd: no such file or directory"
	Nov 22 00:34:00 embed-certs-084979 crio[569]: time="2025-11-22T00:34:00.125850015Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/280aa55ec0e03cfa840ec220975c34c2ed5ade669cc6796fc11859d513100364/merged/etc/group: no such file or directory"
	Nov 22 00:34:00 embed-certs-084979 crio[569]: time="2025-11-22T00:34:00.126210891Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:34:00 embed-certs-084979 crio[569]: time="2025-11-22T00:34:00.140949696Z" level=info msg="Removed container 8a153fddc9f29ff204a07153d9c618c6f244d735dfee7fb2149fe2f3cc78e05f: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dxs97/dashboard-metrics-scraper" id=dae9cb3c-70a4-404a-8fae-ba1ec0c8e0f9 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 22 00:34:00 embed-certs-084979 crio[569]: time="2025-11-22T00:34:00.1569336Z" level=info msg="Created container 214f0202a39acc33ca7e12f9cc9bbe8841fd2892b64ce24bd981f90fcc5f380d: kube-system/storage-provisioner/storage-provisioner" id=c225867e-bc05-4dc4-babb-68bf1f8c1a17 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:34:00 embed-certs-084979 crio[569]: time="2025-11-22T00:34:00.157488479Z" level=info msg="Starting container: 214f0202a39acc33ca7e12f9cc9bbe8841fd2892b64ce24bd981f90fcc5f380d" id=9dd925cf-479f-427c-951b-2e5d8b8345de name=/runtime.v1.RuntimeService/StartContainer
	Nov 22 00:34:00 embed-certs-084979 crio[569]: time="2025-11-22T00:34:00.159539997Z" level=info msg="Started container" PID=1790 containerID=214f0202a39acc33ca7e12f9cc9bbe8841fd2892b64ce24bd981f90fcc5f380d description=kube-system/storage-provisioner/storage-provisioner id=9dd925cf-479f-427c-951b-2e5d8b8345de name=/runtime.v1.RuntimeService/StartContainer sandboxID=0b132aea60feb510ae85fd376e6dab377b9269f3bfb01b83a6a2133c82a52d54
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	214f0202a39ac       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           17 seconds ago      Running             storage-provisioner         1                   0b132aea60feb       storage-provisioner                          kube-system
	eac069e8ad82b       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           19 seconds ago      Exited              dashboard-metrics-scraper   2                   aa07ede9d3e9c       dashboard-metrics-scraper-6ffb444bf9-dxs97   kubernetes-dashboard
	0bc2f72c37d29       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   40 seconds ago      Running             kubernetes-dashboard        0                   60aa0f9b1cce5       kubernetes-dashboard-855c9754f9-qrrmd        kubernetes-dashboard
	de7358749b24c       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           48 seconds ago      Running             busybox                     1                   355e0322d05e0       busybox                                      default
	7a3b2db058ecc       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           48 seconds ago      Running             coredns                     0                   99c94fd31635f       coredns-66bc5c9577-jjldt                     kube-system
	63a0c0dc4e6cf       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           48 seconds ago      Exited              storage-provisioner         0                   0b132aea60feb       storage-provisioner                          kube-system
	b2cdb618d6f51       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           48 seconds ago      Running             kindnet-cni                 0                   37acf4f7f988b       kindnet-57bxk                                kube-system
	168f33d068d77       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           48 seconds ago      Running             kube-proxy                  0                   b1b3dec0799ed       kube-proxy-lsc2k                             kube-system
	7a9dde98c18cd       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           51 seconds ago      Running             kube-scheduler              0                   5c872cbfb36f7       kube-scheduler-embed-certs-084979            kube-system
	e8c7c674c4b54       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           51 seconds ago      Running             kube-controller-manager     0                   2d3a3322e90a1       kube-controller-manager-embed-certs-084979   kube-system
	b3fad9a866aee       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           51 seconds ago      Running             etcd                        0                   03ebd62584363       etcd-embed-certs-084979                      kube-system
	551c0189a8734       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           51 seconds ago      Running             kube-apiserver              0                   8c588a2a43268       kube-apiserver-embed-certs-084979            kube-system
	
	
	==> coredns [7a3b2db058ecc0936bd81211047530ef5b9db1b29a2da62db5f78f96fef9818a] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52655 - 57844 "HINFO IN 4175754057319742776.6489283951726980942. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.46913766s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-084979
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-084979
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785
	                    minikube.k8s.io/name=embed-certs-084979
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_22T00_32_03_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 22 Nov 2025 00:32:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-084979
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 22 Nov 2025 00:34:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 22 Nov 2025 00:33:59 +0000   Sat, 22 Nov 2025 00:31:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 22 Nov 2025 00:33:59 +0000   Sat, 22 Nov 2025 00:31:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 22 Nov 2025 00:33:59 +0000   Sat, 22 Nov 2025 00:31:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 22 Nov 2025 00:33:59 +0000   Sat, 22 Nov 2025 00:32:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-084979
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 5665009e93b91d39dc05718b691e3875
	  System UUID:                c16ffbd2-b440-4b5b-8f37-f7fb083b435c
	  Boot ID:                    8e6acb0e-95c3-406d-a4d5-d86e16610b0f
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 coredns-66bc5c9577-jjldt                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m9s
	  kube-system                 etcd-embed-certs-084979                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m14s
	  kube-system                 kindnet-57bxk                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m9s
	  kube-system                 kube-apiserver-embed-certs-084979             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m14s
	  kube-system                 kube-controller-manager-embed-certs-084979    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m14s
	  kube-system                 kube-proxy-lsc2k                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 kube-scheduler-embed-certs-084979             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m15s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m8s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-dxs97    0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-qrrmd         0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 2m8s               kube-proxy       
	  Normal  Starting                 48s                kube-proxy       
	  Normal  Starting                 2m15s              kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m14s              kubelet          Node embed-certs-084979 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m14s              kubelet          Node embed-certs-084979 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m14s              kubelet          Node embed-certs-084979 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m10s              node-controller  Node embed-certs-084979 event: Registered Node embed-certs-084979 in Controller
	  Normal  NodeReady                88s                kubelet          Node embed-certs-084979 status is now: NodeReady
	  Normal  Starting                 52s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  51s (x8 over 52s)  kubelet          Node embed-certs-084979 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    51s (x8 over 52s)  kubelet          Node embed-certs-084979 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     51s (x8 over 52s)  kubelet          Node embed-certs-084979 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           46s                node-controller  Node embed-certs-084979 event: Registered Node embed-certs-084979 in Controller
	
	
	==> dmesg <==
	[  +0.082960] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023673] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.276450] kauditd_printk_skb: 47 callbacks suppressed
	[Nov21 23:48] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.062274] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.023896] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.023887] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.023879] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.023873] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +2.047773] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +4.031577] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[Nov21 23:49] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[ +16.383304] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[ +32.252695] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	
	
	==> etcd [b3fad9a866aee07f831f2b8d9504071e3b206772e1161a3e3fa2e5137fe54ecd] <==
	{"level":"warn","ts":"2025-11-22T00:33:28.057289Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:28.064780Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:28.074349Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:28.081277Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:28.088466Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:28.094551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:28.101546Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:28.109310Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:28.115689Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:28.128342Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:28.134510Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:28.141579Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:28.196344Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40754","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-22T00:33:58.287240Z","caller":"traceutil/trace.go:172","msg":"trace[1458323035] transaction","detail":"{read_only:false; response_revision:622; number_of_response:1; }","duration":"187.441255ms","start":"2025-11-22T00:33:58.099783Z","end":"2025-11-22T00:33:58.287225Z","steps":["trace[1458323035] 'process raft request'  (duration: 187.339607ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-22T00:33:59.352428Z","caller":"traceutil/trace.go:172","msg":"trace[118086471] transaction","detail":"{read_only:false; response_revision:625; number_of_response:1; }","duration":"144.121329ms","start":"2025-11-22T00:33:59.208290Z","end":"2025-11-22T00:33:59.352412Z","steps":["trace[118086471] 'process raft request'  (duration: 144.009844ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-22T00:33:59.359329Z","caller":"traceutil/trace.go:172","msg":"trace[554381958] transaction","detail":"{read_only:false; response_revision:627; number_of_response:1; }","duration":"150.04287ms","start":"2025-11-22T00:33:59.209270Z","end":"2025-11-22T00:33:59.359313Z","steps":["trace[554381958] 'process raft request'  (duration: 149.997409ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-22T00:33:59.359549Z","caller":"traceutil/trace.go:172","msg":"trace[34189896] transaction","detail":"{read_only:false; response_revision:626; number_of_response:1; }","duration":"151.196788ms","start":"2025-11-22T00:33:59.208335Z","end":"2025-11-22T00:33:59.359532Z","steps":["trace[34189896] 'process raft request'  (duration: 150.843072ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-22T00:33:59.650495Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"111.76501ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-22T00:33:59.650568Z","caller":"traceutil/trace.go:172","msg":"trace[456525280] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:628; }","duration":"111.843781ms","start":"2025-11-22T00:33:59.538709Z","end":"2025-11-22T00:33:59.650553Z","steps":["trace[456525280] 'range keys from in-memory index tree'  (duration: 111.733817ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-22T00:33:59.650874Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"169.696531ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-22T00:33:59.651437Z","caller":"traceutil/trace.go:172","msg":"trace[477154817] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:628; }","duration":"170.878575ms","start":"2025-11-22T00:33:59.480544Z","end":"2025-11-22T00:33:59.651422Z","steps":["trace[477154817] 'agreement among raft nodes before linearized reading'  (duration: 53.2539ms)","trace[477154817] 'range keys from in-memory index tree'  (duration: 116.40279ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-22T00:33:59.652978Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"116.7226ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571766331818253184 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-ey6jamipqrhivwpu2ro3mnptwm\" mod_revision:617 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-ey6jamipqrhivwpu2ro3mnptwm\" value_size:610 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-ey6jamipqrhivwpu2ro3mnptwm\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-22T00:33:59.653425Z","caller":"traceutil/trace.go:172","msg":"trace[2048328179] transaction","detail":"{read_only:false; response_revision:630; number_of_response:1; }","duration":"175.10494ms","start":"2025-11-22T00:33:59.478292Z","end":"2025-11-22T00:33:59.653397Z","steps":["trace[2048328179] 'process raft request'  (duration: 174.756091ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-22T00:33:59.653620Z","caller":"traceutil/trace.go:172","msg":"trace[1418032813] transaction","detail":"{read_only:false; response_revision:629; number_of_response:1; }","duration":"210.916108ms","start":"2025-11-22T00:33:59.442696Z","end":"2025-11-22T00:33:59.653612Z","steps":["trace[1418032813] 'process raft request'  (duration: 91.185453ms)","trace[1418032813] 'compare'  (duration: 116.506012ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-22T00:33:59.653512Z","caller":"traceutil/trace.go:172","msg":"trace[1508827890] transaction","detail":"{read_only:false; response_revision:631; number_of_response:1; }","duration":"171.058752ms","start":"2025-11-22T00:33:59.482440Z","end":"2025-11-22T00:33:59.653498Z","steps":["trace[1508827890] 'process raft request'  (duration: 170.710138ms)"],"step_count":1}
	
	
	==> kernel <==
	 00:34:17 up  1:16,  0 user,  load average: 4.35, 3.32, 2.08
	Linux embed-certs-084979 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b2cdb618d6f5111ef35374169192910ce886543535917970b8758a90f66cbbf7] <==
	I1122 00:33:29.509362       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1122 00:33:29.509580       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1122 00:33:29.509712       1 main.go:148] setting mtu 1500 for CNI 
	I1122 00:33:29.509727       1 main.go:178] kindnetd IP family: "ipv4"
	I1122 00:33:29.509746       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-22T00:33:29Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1122 00:33:29.806666       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1122 00:33:29.806798       1 controller.go:381] "Waiting for informer caches to sync"
	I1122 00:33:29.806815       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1122 00:33:29.806979       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1122 00:33:30.207007       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1122 00:33:30.207045       1 metrics.go:72] Registering metrics
	I1122 00:33:30.207147       1 controller.go:711] "Syncing nftables rules"
	I1122 00:33:39.807691       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1122 00:33:39.807742       1 main.go:301] handling current node
	I1122 00:33:49.807065       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1122 00:33:49.807104       1 main.go:301] handling current node
	I1122 00:33:59.806649       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1122 00:33:59.806697       1 main.go:301] handling current node
	I1122 00:34:09.807007       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1122 00:34:09.807042       1 main.go:301] handling current node
	
	
	==> kube-apiserver [551c0189a873461b8c5320fb2ea521e29317b304075057684cc2bffd38fa0d39] <==
	I1122 00:33:28.704968       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1122 00:33:28.705979       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1122 00:33:28.704943       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1122 00:33:28.704957       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1122 00:33:28.706646       1 aggregator.go:171] initial CRD sync complete...
	I1122 00:33:28.706682       1 autoregister_controller.go:144] Starting autoregister controller
	I1122 00:33:28.706707       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1122 00:33:28.706730       1 cache.go:39] Caches are synced for autoregister controller
	I1122 00:33:28.705272       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1122 00:33:28.724607       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1122 00:33:28.738106       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1122 00:33:28.747356       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1122 00:33:28.747736       1 policy_source.go:240] refreshing policies
	I1122 00:33:28.754375       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1122 00:33:29.057853       1 controller.go:667] quota admission added evaluator for: namespaces
	I1122 00:33:29.092874       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1122 00:33:29.113784       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1122 00:33:29.119748       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1122 00:33:29.127773       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1122 00:33:29.158250       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.68.127"}
	I1122 00:33:29.172246       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.125.214"}
	I1122 00:33:29.601714       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1122 00:33:32.025299       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1122 00:33:32.474460       1 controller.go:667] quota admission added evaluator for: endpoints
	I1122 00:33:32.525746       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [e8c7c674c4b5496f49f6a4264627256c21e25a81e0bd0024407bf75f2b148d3e] <==
	I1122 00:33:31.977553       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1122 00:33:31.980845       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1122 00:33:31.983044       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1122 00:33:31.985270       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1122 00:33:31.989570       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1122 00:33:31.990828       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1122 00:33:31.992344       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1122 00:33:31.994153       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1122 00:33:32.021694       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1122 00:33:32.021710       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1122 00:33:32.021795       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1122 00:33:32.021838       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1122 00:33:32.021869       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1122 00:33:32.021901       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1122 00:33:32.021950       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1122 00:33:32.021969       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1122 00:33:32.022011       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1122 00:33:32.023495       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1122 00:33:32.023903       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1122 00:33:32.028085       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1122 00:33:32.028105       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1122 00:33:32.029599       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1122 00:33:32.029870       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1122 00:33:32.031703       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1122 00:33:32.039212       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [168f33d068d777b87d2d6ddd27efae417eae740c606d0d8e6c3e51c038f7784f] <==
	I1122 00:33:29.383365       1 server_linux.go:53] "Using iptables proxy"
	I1122 00:33:29.447772       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1122 00:33:29.548890       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1122 00:33:29.548914       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1122 00:33:29.548973       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1122 00:33:29.566041       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1122 00:33:29.566114       1 server_linux.go:132] "Using iptables Proxier"
	I1122 00:33:29.570694       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1122 00:33:29.571584       1 server.go:527] "Version info" version="v1.34.1"
	I1122 00:33:29.571625       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:33:29.573490       1 config.go:403] "Starting serviceCIDR config controller"
	I1122 00:33:29.573515       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1122 00:33:29.573534       1 config.go:309] "Starting node config controller"
	I1122 00:33:29.573544       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1122 00:33:29.573550       1 config.go:106] "Starting endpoint slice config controller"
	I1122 00:33:29.573560       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1122 00:33:29.573540       1 config.go:200] "Starting service config controller"
	I1122 00:33:29.573578       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1122 00:33:29.673727       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1122 00:33:29.673738       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1122 00:33:29.673762       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1122 00:33:29.673755       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [7a9dde98c18cd008af1877b7920c71620a86d6002ad73e035d4cfdfd76b47f11] <==
	I1122 00:33:27.213485       1 serving.go:386] Generated self-signed cert in-memory
	W1122 00:33:28.616916       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1122 00:33:28.616948       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1122 00:33:28.616960       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1122 00:33:28.616969       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1122 00:33:28.707012       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1122 00:33:28.707041       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:33:28.710296       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:33:28.710327       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:33:28.711964       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1122 00:33:28.711969       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1122 00:33:28.811151       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 22 00:33:32 embed-certs-084979 kubelet[728]: I1122 00:33:32.775031     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qh9hr\" (UniqueName: \"kubernetes.io/projected/fdf6c2d2-5aff-4411-ab7a-2f147e9fc878-kube-api-access-qh9hr\") pod \"dashboard-metrics-scraper-6ffb444bf9-dxs97\" (UID: \"fdf6c2d2-5aff-4411-ab7a-2f147e9fc878\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dxs97"
	Nov 22 00:33:32 embed-certs-084979 kubelet[728]: I1122 00:33:32.775102     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/e0fbb25a-db5f-4d07-9c19-7181a408010c-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-qrrmd\" (UID: \"e0fbb25a-db5f-4d07-9c19-7181a408010c\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-qrrmd"
	Nov 22 00:33:32 embed-certs-084979 kubelet[728]: I1122 00:33:32.775179     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6l9dm\" (UniqueName: \"kubernetes.io/projected/e0fbb25a-db5f-4d07-9c19-7181a408010c-kube-api-access-6l9dm\") pod \"kubernetes-dashboard-855c9754f9-qrrmd\" (UID: \"e0fbb25a-db5f-4d07-9c19-7181a408010c\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-qrrmd"
	Nov 22 00:33:32 embed-certs-084979 kubelet[728]: I1122 00:33:32.775214     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/fdf6c2d2-5aff-4411-ab7a-2f147e9fc878-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-dxs97\" (UID: \"fdf6c2d2-5aff-4411-ab7a-2f147e9fc878\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dxs97"
	Nov 22 00:33:38 embed-certs-084979 kubelet[728]: I1122 00:33:38.496365     728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-qrrmd" podStartSLOduration=2.553343877 podStartE2EDuration="6.496343314s" podCreationTimestamp="2025-11-22 00:33:32 +0000 UTC" firstStartedPulling="2025-11-22 00:33:32.966700794 +0000 UTC m=+7.079716471" lastFinishedPulling="2025-11-22 00:33:36.909700236 +0000 UTC m=+11.022715908" observedRunningTime="2025-11-22 00:33:37.058681754 +0000 UTC m=+11.171697443" watchObservedRunningTime="2025-11-22 00:33:38.496343314 +0000 UTC m=+12.609359002"
	Nov 22 00:33:39 embed-certs-084979 kubelet[728]: I1122 00:33:39.053824     728 scope.go:117] "RemoveContainer" containerID="d74a079781558f1182cd43c01109db894cf380dec8da7b6d01ff36bf58567df2"
	Nov 22 00:33:40 embed-certs-084979 kubelet[728]: I1122 00:33:40.057487     728 scope.go:117] "RemoveContainer" containerID="d74a079781558f1182cd43c01109db894cf380dec8da7b6d01ff36bf58567df2"
	Nov 22 00:33:40 embed-certs-084979 kubelet[728]: I1122 00:33:40.057617     728 scope.go:117] "RemoveContainer" containerID="8a153fddc9f29ff204a07153d9c618c6f244d735dfee7fb2149fe2f3cc78e05f"
	Nov 22 00:33:40 embed-certs-084979 kubelet[728]: E1122 00:33:40.057801     728 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dxs97_kubernetes-dashboard(fdf6c2d2-5aff-4411-ab7a-2f147e9fc878)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dxs97" podUID="fdf6c2d2-5aff-4411-ab7a-2f147e9fc878"
	Nov 22 00:33:41 embed-certs-084979 kubelet[728]: I1122 00:33:41.061922     728 scope.go:117] "RemoveContainer" containerID="8a153fddc9f29ff204a07153d9c618c6f244d735dfee7fb2149fe2f3cc78e05f"
	Nov 22 00:33:41 embed-certs-084979 kubelet[728]: E1122 00:33:41.062105     728 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dxs97_kubernetes-dashboard(fdf6c2d2-5aff-4411-ab7a-2f147e9fc878)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dxs97" podUID="fdf6c2d2-5aff-4411-ab7a-2f147e9fc878"
	Nov 22 00:33:47 embed-certs-084979 kubelet[728]: I1122 00:33:47.103652     728 scope.go:117] "RemoveContainer" containerID="8a153fddc9f29ff204a07153d9c618c6f244d735dfee7fb2149fe2f3cc78e05f"
	Nov 22 00:33:47 embed-certs-084979 kubelet[728]: E1122 00:33:47.103806     728 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dxs97_kubernetes-dashboard(fdf6c2d2-5aff-4411-ab7a-2f147e9fc878)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dxs97" podUID="fdf6c2d2-5aff-4411-ab7a-2f147e9fc878"
	Nov 22 00:33:57 embed-certs-084979 kubelet[728]: I1122 00:33:57.987949     728 scope.go:117] "RemoveContainer" containerID="8a153fddc9f29ff204a07153d9c618c6f244d735dfee7fb2149fe2f3cc78e05f"
	Nov 22 00:33:59 embed-certs-084979 kubelet[728]: I1122 00:33:59.206574     728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dxs97" podStartSLOduration=21.302679591 podStartE2EDuration="27.206550315s" podCreationTimestamp="2025-11-22 00:33:32 +0000 UTC" firstStartedPulling="2025-11-22 00:33:32.969480706 +0000 UTC m=+7.082496373" lastFinishedPulling="2025-11-22 00:33:38.873351427 +0000 UTC m=+12.986367097" observedRunningTime="2025-11-22 00:33:59.205924593 +0000 UTC m=+33.318940281" watchObservedRunningTime="2025-11-22 00:33:59.206550315 +0000 UTC m=+33.319565987"
	Nov 22 00:34:00 embed-certs-084979 kubelet[728]: I1122 00:34:00.115975     728 scope.go:117] "RemoveContainer" containerID="8a153fddc9f29ff204a07153d9c618c6f244d735dfee7fb2149fe2f3cc78e05f"
	Nov 22 00:34:00 embed-certs-084979 kubelet[728]: I1122 00:34:00.116187     728 scope.go:117] "RemoveContainer" containerID="eac069e8ad82b5ef32220afcf3eb8a231f95e7ad1eb61bc623d9fd8633ea1791"
	Nov 22 00:34:00 embed-certs-084979 kubelet[728]: E1122 00:34:00.116405     728 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dxs97_kubernetes-dashboard(fdf6c2d2-5aff-4411-ab7a-2f147e9fc878)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dxs97" podUID="fdf6c2d2-5aff-4411-ab7a-2f147e9fc878"
	Nov 22 00:34:00 embed-certs-084979 kubelet[728]: I1122 00:34:00.117584     728 scope.go:117] "RemoveContainer" containerID="63a0c0dc4e6cf5ffc4c266654608cd900faa4d0733422f70622d1222784119da"
	Nov 22 00:34:07 embed-certs-084979 kubelet[728]: I1122 00:34:07.104133     728 scope.go:117] "RemoveContainer" containerID="eac069e8ad82b5ef32220afcf3eb8a231f95e7ad1eb61bc623d9fd8633ea1791"
	Nov 22 00:34:07 embed-certs-084979 kubelet[728]: E1122 00:34:07.104820     728 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dxs97_kubernetes-dashboard(fdf6c2d2-5aff-4411-ab7a-2f147e9fc878)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dxs97" podUID="fdf6c2d2-5aff-4411-ab7a-2f147e9fc878"
	Nov 22 00:34:14 embed-certs-084979 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 22 00:34:14 embed-certs-084979 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 22 00:34:14 embed-certs-084979 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 22 00:34:14 embed-certs-084979 systemd[1]: kubelet.service: Consumed 1.571s CPU time.
	
	
	==> kubernetes-dashboard [0bc2f72c37d29da0e0ff3321424e7cbbc4286a69d947d0bbd699c20ae15b9455] <==
	2025/11/22 00:33:36 Starting overwatch
	2025/11/22 00:33:36 Using namespace: kubernetes-dashboard
	2025/11/22 00:33:36 Using in-cluster config to connect to apiserver
	2025/11/22 00:33:36 Using secret token for csrf signing
	2025/11/22 00:33:36 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/22 00:33:36 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/22 00:33:36 Successful initial request to the apiserver, version: v1.34.1
	2025/11/22 00:33:36 Generating JWE encryption key
	2025/11/22 00:33:36 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/22 00:33:36 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/22 00:33:37 Initializing JWE encryption key from synchronized object
	2025/11/22 00:33:37 Creating in-cluster Sidecar client
	2025/11/22 00:33:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/22 00:33:37 Serving insecurely on HTTP port: 9090
	2025/11/22 00:34:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [214f0202a39acc33ca7e12f9cc9bbe8841fd2892b64ce24bd981f90fcc5f380d] <==
	I1122 00:34:00.172652       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1122 00:34:00.182202       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1122 00:34:00.182248       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1122 00:34:00.184543       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:34:03.640094       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:34:07.901511       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:34:11.499496       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:34:14.553263       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:34:17.576090       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:34:17.580653       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1122 00:34:17.580823       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1122 00:34:17.580979       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-084979_d7d010e1-02ed-40d0-bee7-7354b514748a!
	I1122 00:34:17.580980       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"85172fda-6e3b-4170-b156-9c1a3f0d4eef", APIVersion:"v1", ResourceVersion:"654", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-084979_d7d010e1-02ed-40d0-bee7-7354b514748a became leader
	W1122 00:34:17.582961       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:34:17.586507       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1122 00:34:17.681175       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-084979_d7d010e1-02ed-40d0-bee7-7354b514748a!
	
	
	==> storage-provisioner [63a0c0dc4e6cf5ffc4c266654608cd900faa4d0733422f70622d1222784119da] <==
	I1122 00:33:29.362995       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1122 00:33:59.364940       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
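For local triage, the post-mortem blob above can be regenerated against the same profile. A minimal sketch, assuming the embed-certs-084979 profile still exists on the host and that out/minikube-linux-amd64 is the binary under test (the output file path is illustrative):

	# Collect the combined CRI-O / kubelet / control-plane logs shown above
	out/minikube-linux-amd64 -p embed-certs-084979 logs --file=/tmp/embed-certs-postmortem.log
	# Re-create the node description and the container listing
	kubectl --context embed-certs-084979 describe node embed-certs-084979
	out/minikube-linux-amd64 -p embed-certs-084979 ssh -- sudo crictl ps -a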
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-084979 -n embed-certs-084979
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-084979 -n embed-certs-084979: exit status 2 (387.25539ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
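The "Running" API server combined with exit status 2 is what the Pause test tripped over: minikube status reflects not-fully-running components in its exit code even while individual fields read Running (hence the harness's "may be ok"). A minimal cross-check, assuming the profile is still up:

	# Component status vs. the outer node container's state (sketch)
	out/minikube-linux-amd64 status -p embed-certs-084979 --format='{{.APIServer}} {{.Kubelet}}'; echo "exit=$?"
	docker inspect embed-certs-084979 --format 'running={{.State.Running}} paused={{.State.Paused}}'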
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-084979 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
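The second post-mortem pass below snapshots the host's proxy environment and the full docker inspect of the node container. A minimal sketch of the equivalent manual commands, assuming the docker driver as in this run:

	# Proxy environment as sampled by the harness
	env | grep -iE '^(http|https|no)_proxy=' || true
	# Full configuration of the node container
	docker inspect embed-certs-084979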
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-084979
helpers_test.go:243: (dbg) docker inspect embed-certs-084979:

-- stdout --
	[
	    {
	        "Id": "e8d02ad472d1b8f40ae0fd92f7878724d9d1bfd0ed3ab0121e898a6471675d58",
	        "Created": "2025-11-22T00:31:48.222415176Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 272166,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-22T00:33:19.763704363Z",
	            "FinishedAt": "2025-11-22T00:33:18.875915274Z"
	        },
	        "Image": "sha256:e5906b22e872a17998ae88aee6d850484e7a99144e0db6afcf2c44a53e6042d4",
	        "ResolvConfPath": "/var/lib/docker/containers/e8d02ad472d1b8f40ae0fd92f7878724d9d1bfd0ed3ab0121e898a6471675d58/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e8d02ad472d1b8f40ae0fd92f7878724d9d1bfd0ed3ab0121e898a6471675d58/hostname",
	        "HostsPath": "/var/lib/docker/containers/e8d02ad472d1b8f40ae0fd92f7878724d9d1bfd0ed3ab0121e898a6471675d58/hosts",
	        "LogPath": "/var/lib/docker/containers/e8d02ad472d1b8f40ae0fd92f7878724d9d1bfd0ed3ab0121e898a6471675d58/e8d02ad472d1b8f40ae0fd92f7878724d9d1bfd0ed3ab0121e898a6471675d58-json.log",
	        "Name": "/embed-certs-084979",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-084979:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-084979",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e8d02ad472d1b8f40ae0fd92f7878724d9d1bfd0ed3ab0121e898a6471675d58",
	                "LowerDir": "/var/lib/docker/overlay2/d197a9ef188060d5b1f2712126cd316d5f140f04dd173528b8af8e7594d20568-init/diff:/var/lib/docker/overlay2/ba9391341a6f38c98c330a240b44a37e901d779a7c95e15c141c59c46e09c348/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d197a9ef188060d5b1f2712126cd316d5f140f04dd173528b8af8e7594d20568/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d197a9ef188060d5b1f2712126cd316d5f140f04dd173528b8af8e7594d20568/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d197a9ef188060d5b1f2712126cd316d5f140f04dd173528b8af8e7594d20568/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-084979",
	                "Source": "/var/lib/docker/volumes/embed-certs-084979/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-084979",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-084979",
	                "name.minikube.sigs.k8s.io": "embed-certs-084979",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "00c34a0d6397cc783272ba8c63b628e63a5d89e440a413e263b6077ab7adcaa7",
	            "SandboxKey": "/var/run/docker/netns/00c34a0d6397",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-084979": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0d41c17c02e28b2753b6d078dd9a412b682778fc89e095be2adad8a79a3a99d8",
	                    "EndpointID": "d0347b03605f5059833eefe2b44e27a910145cfe7bbc3a67bd7e603fbee6f733",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "5e:76:28:b2:2e:d6",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-084979",
	                        "e8d02ad472d1"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
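The inspect output above shows the container running un-paused ("Paused": false) with ephemeral host ports bound on 127.0.0.1. A minimal Go sketch of pulling just those two facts with `docker container inspect -f`, reusing the same Go template for the 22/tcp port that cli_runner.go issues later in this log; a docker CLI on PATH and the container name are assumed:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// inspect shells out to `docker container inspect -f`, as the harness's
// cli_runner does, and returns the rendered template.
func inspect(name, format string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", "-f", format, name).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	paused, err := inspect("embed-certs-084979", "{{.State.Paused}}")
	if err != nil {
		panic(err)
	}
	// Same template as the cli_runner.go call further down in this report.
	port, err := inspect("embed-certs-084979",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`)
	if err != nil {
		panic(err)
	}
	fmt.Printf("paused=%s ssh=127.0.0.1:%s\n", paused, port) // e.g. paused=false ssh=127.0.0.1:33088
}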
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-084979 -n embed-certs-084979
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-084979 -n embed-certs-084979: exit status 2 (374.787511ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
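The harness tolerates the non-zero status here because `minikube status` reports component state through its exit code rather than reserving non-zero for command failure; with Host=Running, exit status 2 is consistent with a paused or stopped cluster rather than a broken machine (treat the exact code meanings as an assumption about this minikube version). A hedged sketch of reproducing the check:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", "embed-certs-084979")
	out, _ := cmd.CombinedOutput() // a non-zero exit surfaces as *exec.ExitError
	// ProcessState is populated once CombinedOutput has waited on the process.
	fmt.Printf("host=%q exit=%d\n", string(out), cmd.ProcessState.ExitCode())
}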
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-084979 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-084979 logs -n 25: (1.271302253s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p no-preload-983546                                                                                                                                                                                                                          │ no-preload-983546            │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │ 22 Nov 25 00:32 UTC │
	│ start   │ -p newest-cni-531189 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-531189            │ jenkins │ v1.37.0 │ 22 Nov 25 00:32 UTC │ 22 Nov 25 00:33 UTC │
	│ addons  │ enable metrics-server -p embed-certs-084979 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-084979           │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │                     │
	│ stop    │ -p embed-certs-084979 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-084979           │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │ 22 Nov 25 00:33 UTC │
	│ addons  │ enable dashboard -p embed-certs-084979 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-084979           │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │ 22 Nov 25 00:33 UTC │
	│ start   │ -p embed-certs-084979 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-084979           │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │ 22 Nov 25 00:34 UTC │
	│ addons  │ enable metrics-server -p newest-cni-531189 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-531189            │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │                     │
	│ stop    │ -p newest-cni-531189 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-531189            │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │ 22 Nov 25 00:33 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-046175 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-046175 │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-046175 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-046175 │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │ 22 Nov 25 00:33 UTC │
	│ addons  │ enable dashboard -p newest-cni-531189 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-531189            │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │ 22 Nov 25 00:33 UTC │
	│ start   │ -p newest-cni-531189 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-531189            │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │ 22 Nov 25 00:33 UTC │
	│ image   │ newest-cni-531189 image list --format=json                                                                                                                                                                                                    │ newest-cni-531189            │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │ 22 Nov 25 00:33 UTC │
	│ pause   │ -p newest-cni-531189 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-531189            │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-046175 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-046175 │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │ 22 Nov 25 00:33 UTC │
	│ start   │ -p default-k8s-diff-port-046175 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-046175 │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │                     │
	│ start   │ -p kubernetes-upgrade-619859 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-619859    │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │                     │
	│ start   │ -p kubernetes-upgrade-619859 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-619859    │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │ 22 Nov 25 00:33 UTC │
	│ delete  │ -p newest-cni-531189                                                                                                                                                                                                                          │ newest-cni-531189            │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │ 22 Nov 25 00:33 UTC │
	│ delete  │ -p newest-cni-531189                                                                                                                                                                                                                          │ newest-cni-531189            │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │ 22 Nov 25 00:33 UTC │
	│ start   │ -p auto-239758 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-239758                  │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-619859                                                                                                                                                                                                                  │ kubernetes-upgrade-619859    │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │ 22 Nov 25 00:34 UTC │
	│ start   │ -p kindnet-239758 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio                                                                                                      │ kindnet-239758               │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │                     │
	│ image   │ embed-certs-084979 image list --format=json                                                                                                                                                                                                   │ embed-certs-084979           │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │ 22 Nov 25 00:34 UTC │
	│ pause   │ -p embed-certs-084979 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-084979           │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/22 00:34:00
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1122 00:34:00.311386  286707 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:34:00.311651  286707 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:34:00.311662  286707 out.go:374] Setting ErrFile to fd 2...
	I1122 00:34:00.311670  286707 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:34:00.311899  286707 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9122/.minikube/bin
	I1122 00:34:00.312407  286707 out.go:368] Setting JSON to false
	I1122 00:34:00.313669  286707 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4589,"bootTime":1763767051,"procs":406,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1122 00:34:00.313725  286707 start.go:143] virtualization: kvm guest
	I1122 00:34:00.315575  286707 out.go:179] * [kindnet-239758] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1122 00:34:00.316959  286707 out.go:179]   - MINIKUBE_LOCATION=21934
	I1122 00:34:00.316989  286707 notify.go:221] Checking for updates...
	I1122 00:34:00.319162  286707 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 00:34:00.320747  286707 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-9122/kubeconfig
	I1122 00:34:00.322032  286707 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-9122/.minikube
	I1122 00:34:00.323281  286707 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1122 00:34:00.324325  286707 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 00:33:55.874797  280462 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1122 00:33:55.879502  280462 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1122 00:33:55.880718  280462 api_server.go:141] control plane version: v1.34.1
	I1122 00:33:55.880747  280462 api_server.go:131] duration metric: took 507.052583ms to wait for apiserver health ...
	I1122 00:33:55.880759  280462 system_pods.go:43] waiting for kube-system pods to appear ...
	I1122 00:33:55.884681  280462 system_pods.go:59] 8 kube-system pods found
	I1122 00:33:55.884721  280462 system_pods.go:61] "coredns-66bc5c9577-np5nq" [6bf2527b-42f1-42dd-980e-e1006db2273d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:33:55.884733  280462 system_pods.go:61] "etcd-default-k8s-diff-port-046175" [13c461fc-31bf-48a9-afd5-e9d9d15ed8d0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1122 00:33:55.884747  280462 system_pods.go:61] "kindnet-nqk28" [fd6ece46-cf0c-4d24-8859-aaf670c70fb5] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1122 00:33:55.884753  280462 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-046175" [baf53a5a-35f6-4d69-8adf-62d13c8d4d27] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1122 00:33:55.884763  280462 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-046175" [12f53e71-5518-4b0b-bcf0-7f99616fcf48] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1122 00:33:55.884769  280462 system_pods.go:61] "kube-proxy-jdzcl" [f20a454d-e357-46bf-803b-f0166329db1a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1122 00:33:55.884783  280462 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-046175" [042d05f3-1e4f-45d5-abad-8e69368f986c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1122 00:33:55.884788  280462 system_pods.go:61] "storage-provisioner" [5f32ba19-162c-4893-a387-2c8b492c1b6a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:33:55.884798  280462 system_pods.go:74] duration metric: took 4.03168ms to wait for pod list to return data ...
	I1122 00:33:55.884808  280462 default_sa.go:34] waiting for default service account to be created ...
	I1122 00:33:55.887037  280462 default_sa.go:45] found service account: "default"
	I1122 00:33:55.887082  280462 default_sa.go:55] duration metric: took 2.266482ms for default service account to be created ...
	I1122 00:33:55.887093  280462 system_pods.go:116] waiting for k8s-apps to be running ...
	I1122 00:33:55.889545  280462 system_pods.go:86] 8 kube-system pods found
	I1122 00:33:55.889575  280462 system_pods.go:89] "coredns-66bc5c9577-np5nq" [6bf2527b-42f1-42dd-980e-e1006db2273d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:33:55.889585  280462 system_pods.go:89] "etcd-default-k8s-diff-port-046175" [13c461fc-31bf-48a9-afd5-e9d9d15ed8d0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1122 00:33:55.889602  280462 system_pods.go:89] "kindnet-nqk28" [fd6ece46-cf0c-4d24-8859-aaf670c70fb5] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1122 00:33:55.889610  280462 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-046175" [baf53a5a-35f6-4d69-8adf-62d13c8d4d27] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1122 00:33:55.889622  280462 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-046175" [12f53e71-5518-4b0b-bcf0-7f99616fcf48] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1122 00:33:55.889629  280462 system_pods.go:89] "kube-proxy-jdzcl" [f20a454d-e357-46bf-803b-f0166329db1a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1122 00:33:55.889637  280462 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-046175" [042d05f3-1e4f-45d5-abad-8e69368f986c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1122 00:33:55.889644  280462 system_pods.go:89] "storage-provisioner" [5f32ba19-162c-4893-a387-2c8b492c1b6a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:33:55.889653  280462 system_pods.go:126] duration metric: took 2.55247ms to wait for k8s-apps to be running ...
	I1122 00:33:55.889662  280462 system_svc.go:44] waiting for kubelet service to be running ....
	I1122 00:33:55.889708  280462 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:33:55.904654  280462 system_svc.go:56] duration metric: took 14.984652ms WaitForService to wait for kubelet
	I1122 00:33:55.904682  280462 kubeadm.go:587] duration metric: took 3.142754426s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:33:55.904704  280462 node_conditions.go:102] verifying NodePressure condition ...
	I1122 00:33:55.907554  280462 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1122 00:33:55.907591  280462 node_conditions.go:123] node cpu capacity is 8
	I1122 00:33:55.907611  280462 node_conditions.go:105] duration metric: took 2.900646ms to run NodePressure ...
	I1122 00:33:55.907626  280462 start.go:242] waiting for startup goroutines ...
	I1122 00:33:55.907639  280462 start.go:247] waiting for cluster config update ...
	I1122 00:33:55.907657  280462 start.go:256] writing updated cluster config ...
	I1122 00:33:55.907940  280462 ssh_runner.go:195] Run: rm -f paused
	I1122 00:33:55.912033  280462 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:33:55.916393  280462 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-np5nq" in "kube-system" namespace to be "Ready" or be gone ...
	W1122 00:33:57.924716  280462 pod_ready.go:104] pod "coredns-66bc5c9577-np5nq" is not "Ready", error: <nil>
	I1122 00:34:00.325736  286707 config.go:182] Loaded profile config "auto-239758": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:34:00.325829  286707 config.go:182] Loaded profile config "default-k8s-diff-port-046175": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:34:00.325918  286707 config.go:182] Loaded profile config "embed-certs-084979": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:34:00.326012  286707 driver.go:422] Setting default libvirt URI to qemu:///system
	I1122 00:34:00.354118  286707 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1122 00:34:00.354249  286707 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:34:00.424763  286707 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:73 OomKillDisable:false NGoroutines:91 SystemTime:2025-11-22 00:34:00.413840037 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1122 00:34:00.424865  286707 docker.go:319] overlay module found
	I1122 00:34:00.426048  286707 out.go:179] * Using the docker driver based on user configuration
	I1122 00:34:00.427021  286707 start.go:309] selected driver: docker
	I1122 00:34:00.427033  286707 start.go:930] validating driver "docker" against <nil>
	I1122 00:34:00.427043  286707 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 00:34:00.427823  286707 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:34:00.502816  286707 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:79 SystemTime:2025-11-22 00:34:00.481212832 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1122 00:34:00.503015  286707 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1122 00:34:00.503286  286707 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:34:00.508496  286707 out.go:179] * Using Docker driver with root privileges
	I1122 00:34:00.509539  286707 cni.go:84] Creating CNI manager for "kindnet"
	I1122 00:34:00.509559  286707 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1122 00:34:00.509640  286707 start.go:353] cluster config:
	{Name:kindnet-239758 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-239758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:34:00.511038  286707 out.go:179] * Starting "kindnet-239758" primary control-plane node in "kindnet-239758" cluster
	I1122 00:34:00.512121  286707 cache.go:134] Beginning downloading kic base image for docker with crio
	I1122 00:34:00.513194  286707 out.go:179] * Pulling base image v0.0.48-1763588073-21934 ...
	I1122 00:34:00.514142  286707 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:34:00.514175  286707 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21934-9122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1122 00:34:00.514183  286707 cache.go:65] Caching tarball of preloaded images
	I1122 00:34:00.514237  286707 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon
	I1122 00:34:00.514283  286707 preload.go:238] Found /home/jenkins/minikube-integration/21934-9122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1122 00:34:00.514299  286707 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1122 00:34:00.514419  286707 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/kindnet-239758/config.json ...
	I1122 00:34:00.514447  286707 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/kindnet-239758/config.json: {Name:mk5be5a31b2aa4847f9905932e541b5e55d80175 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:34:00.536861  286707 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon, skipping pull
	I1122 00:34:00.536882  286707 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e exists in daemon, skipping load
	I1122 00:34:00.536900  286707 cache.go:243] Successfully downloaded all kic artifacts
	I1122 00:34:00.536935  286707 start.go:360] acquireMachinesLock for kindnet-239758: {Name:mkee69b5fdeef63bab530fe4c4745691b367114b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:34:00.537035  286707 start.go:364] duration metric: took 79.354µs to acquireMachinesLock for "kindnet-239758"
	I1122 00:34:00.537088  286707 start.go:93] Provisioning new machine with config: &{Name:kindnet-239758 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-239758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1122 00:34:00.537184  286707 start.go:125] createHost starting for "" (driver="docker")
	W1122 00:34:00.804735  271909 pod_ready.go:104] pod "coredns-66bc5c9577-jjldt" is not "Ready", error: <nil>
	I1122 00:34:01.303525  271909 pod_ready.go:94] pod "coredns-66bc5c9577-jjldt" is "Ready"
	I1122 00:34:01.303562  271909 pod_ready.go:86] duration metric: took 31.006240985s for pod "coredns-66bc5c9577-jjldt" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:01.306171  271909 pod_ready.go:83] waiting for pod "etcd-embed-certs-084979" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:01.311315  271909 pod_ready.go:94] pod "etcd-embed-certs-084979" is "Ready"
	I1122 00:34:01.311339  271909 pod_ready.go:86] duration metric: took 5.144616ms for pod "etcd-embed-certs-084979" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:01.314302  271909 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-084979" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:01.319601  271909 pod_ready.go:94] pod "kube-apiserver-embed-certs-084979" is "Ready"
	I1122 00:34:01.319621  271909 pod_ready.go:86] duration metric: took 5.292789ms for pod "kube-apiserver-embed-certs-084979" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:01.322868  271909 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-084979" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:01.501261  271909 pod_ready.go:94] pod "kube-controller-manager-embed-certs-084979" is "Ready"
	I1122 00:34:01.501303  271909 pod_ready.go:86] duration metric: took 178.409756ms for pod "kube-controller-manager-embed-certs-084979" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:01.701517  271909 pod_ready.go:83] waiting for pod "kube-proxy-lsc2k" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:02.101840  271909 pod_ready.go:94] pod "kube-proxy-lsc2k" is "Ready"
	I1122 00:34:02.101874  271909 pod_ready.go:86] duration metric: took 400.326166ms for pod "kube-proxy-lsc2k" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:02.301925  271909 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-084979" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:02.703010  271909 pod_ready.go:94] pod "kube-scheduler-embed-certs-084979" is "Ready"
	I1122 00:34:02.703040  271909 pod_ready.go:86] duration metric: took 401.090623ms for pod "kube-scheduler-embed-certs-084979" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:02.703067  271909 pod_ready.go:40] duration metric: took 32.409029552s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:34:02.762926  271909 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1122 00:34:02.764590  271909 out.go:179] * Done! kubectl is now configured to use "embed-certs-084979" cluster and "default" namespace by default
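// Aside: the pod_ready.go waits logged above poll each kube-system pod until it
// is "Ready" or gone. A hedged client-go sketch of that loop (minikube's own
// helper differs in detail; the default kubeconfig path and the coredns pod
// name are taken from this log purely for illustration):
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// readyOrGone reports true once the pod has condition Ready=True or no longer exists.
func readyOrGone(cs kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		return true, nil // a deleted pod counts as done
	}
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	deadline := time.Now().Add(4 * time.Minute) // mirrors the 4m0s budget in the log
	for time.Now().Before(deadline) {
		if ok, err := readyOrGone(cs, "kube-system", "coredns-66bc5c9577-jjldt"); err == nil && ok {
			fmt.Println("ready or gone")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod")
}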
	I1122 00:33:59.687346  284750 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21934-9122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-239758:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -I lz4 -xf /preloaded.tar -C /extractDir: (4.774367656s)
	I1122 00:33:59.687376  284750 kic.go:203] duration metric: took 4.774507848s to extract preloaded images to volume ...
	W1122 00:33:59.687465  284750 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1122 00:33:59.687526  284750 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1122 00:33:59.687574  284750 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1122 00:33:59.745536  284750 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-239758 --name auto-239758 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-239758 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-239758 --network auto-239758 --ip 192.168.76.2 --volume auto-239758:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e
	I1122 00:34:00.337114  284750 cli_runner.go:164] Run: docker container inspect auto-239758 --format={{.State.Running}}
	I1122 00:34:00.357568  284750 cli_runner.go:164] Run: docker container inspect auto-239758 --format={{.State.Status}}
	I1122 00:34:00.379188  284750 cli_runner.go:164] Run: docker exec auto-239758 stat /var/lib/dpkg/alternatives/iptables
	I1122 00:34:00.436041  284750 oci.go:144] the created container "auto-239758" has a running status.
	I1122 00:34:00.436083  284750 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21934-9122/.minikube/machines/auto-239758/id_rsa...
	I1122 00:34:00.525833  284750 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21934-9122/.minikube/machines/auto-239758/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1122 00:34:00.556190  284750 cli_runner.go:164] Run: docker container inspect auto-239758 --format={{.State.Status}}
	I1122 00:34:00.579712  284750 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1122 00:34:00.579736  284750 kic_runner.go:114] Args: [docker exec --privileged auto-239758 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1122 00:34:00.623465  284750 cli_runner.go:164] Run: docker container inspect auto-239758 --format={{.State.Status}}
	I1122 00:34:00.648969  284750 machine.go:94] provisionDockerMachine start ...
	I1122 00:34:00.649108  284750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-239758
	I1122 00:34:00.678962  284750 main.go:143] libmachine: Using SSH client type: native
	I1122 00:34:00.679457  284750 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1122 00:34:00.679527  284750 main.go:143] libmachine: About to run SSH command:
	hostname
	I1122 00:34:00.680453  284750 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55264->127.0.0.1:33103: read: connection reset by peer
	I1122 00:34:03.821813  284750 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-239758
	
	I1122 00:34:03.821881  284750 ubuntu.go:182] provisioning hostname "auto-239758"
	I1122 00:34:03.821976  284750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-239758
	I1122 00:34:03.843983  284750 main.go:143] libmachine: Using SSH client type: native
	I1122 00:34:03.844324  284750 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1122 00:34:03.844352  284750 main.go:143] libmachine: About to run SSH command:
	sudo hostname auto-239758 && echo "auto-239758" | sudo tee /etc/hostname
	I1122 00:34:00.539579  286707 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1122 00:34:00.539763  286707 start.go:159] libmachine.API.Create for "kindnet-239758" (driver="docker")
	I1122 00:34:00.539790  286707 client.go:173] LocalClient.Create starting
	I1122 00:34:00.539867  286707 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem
	I1122 00:34:00.539906  286707 main.go:143] libmachine: Decoding PEM data...
	I1122 00:34:00.539924  286707 main.go:143] libmachine: Parsing certificate...
	I1122 00:34:00.539980  286707 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem
	I1122 00:34:00.540001  286707 main.go:143] libmachine: Decoding PEM data...
	I1122 00:34:00.540034  286707 main.go:143] libmachine: Parsing certificate...
	I1122 00:34:00.540529  286707 cli_runner.go:164] Run: docker network inspect kindnet-239758 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1122 00:34:00.560291  286707 cli_runner.go:211] docker network inspect kindnet-239758 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1122 00:34:00.560406  286707 network_create.go:284] running [docker network inspect kindnet-239758] to gather additional debugging logs...
	I1122 00:34:00.560429  286707 cli_runner.go:164] Run: docker network inspect kindnet-239758
	W1122 00:34:00.580801  286707 cli_runner.go:211] docker network inspect kindnet-239758 returned with exit code 1
	I1122 00:34:00.580832  286707 network_create.go:287] error running [docker network inspect kindnet-239758]: docker network inspect kindnet-239758: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kindnet-239758 not found
	I1122 00:34:00.580848  286707 network_create.go:289] output of [docker network inspect kindnet-239758]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kindnet-239758 not found
	
	** /stderr **
	I1122 00:34:00.580991  286707 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:34:00.603691  286707 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-680fbf0b84de IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8e:e6:2f:e5:eb:9f} reservation:<nil>}
	I1122 00:34:00.604709  286707 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-fb90361b5ee3 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:6a:cd:d3:0b:da:39} reservation:<nil>}
	I1122 00:34:00.605741  286707 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8fa9d9e8b5ac IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:be:02:4b:3f:d0:70} reservation:<nil>}
	I1122 00:34:00.606544  286707 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-8fcd7657b64b IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:d2:ad:c5:eb:8c:57} reservation:<nil>}
	I1122 00:34:00.607337  286707 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-85b8c03d926b IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:de:8e:84:e4:fa:a8} reservation:<nil>}
	I1122 00:34:00.608010  286707 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-0d41c17c02e2 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:e2:39:6f:dd:b9:0c} reservation:<nil>}
	I1122 00:34:00.609098  286707 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f53530}
	I1122 00:34:00.609132  286707 network_create.go:124] attempt to create docker network kindnet-239758 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1122 00:34:00.609198  286707 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-239758 kindnet-239758
	I1122 00:34:00.674766  286707 network_create.go:108] docker network kindnet-239758 192.168.103.0/24 created
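// Aside: the subnet walk logged just above (49 -> 58 -> 67 -> 76 -> 85 -> 94 -> 103)
// steps the third octet by 9 until a free /24 turns up. An illustrative sketch,
// not minikube's actual network.go: here "taken" is simplified to checking
// whether the candidate gateway address is already bound to a local interface
// (docker bridges own the .1 gateway of each subnet they manage).
package main

import (
	"fmt"
	"net"
)

// taken reports whether gw is already assigned to one of this host's interfaces.
func taken(gw string) bool {
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return false
	}
	for _, a := range addrs {
		if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.String() == gw {
			return true
		}
	}
	return false
}

func main() {
	for octet := 49; octet < 256; octet += 9 {
		gw := fmt.Sprintf("192.168.%d.1", octet)
		if taken(gw) {
			fmt.Printf("skipping subnet 192.168.%d.0/24 that is taken\n", octet)
			continue
		}
		fmt.Printf("using free private subnet 192.168.%d.0/24 (gateway %s)\n", octet, gw)
		return
	}
}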
	I1122 00:34:00.674805  286707 kic.go:121] calculated static IP "192.168.103.2" for the "kindnet-239758" container
	I1122 00:34:00.674876  286707 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1122 00:34:00.699124  286707 cli_runner.go:164] Run: docker volume create kindnet-239758 --label name.minikube.sigs.k8s.io=kindnet-239758 --label created_by.minikube.sigs.k8s.io=true
	I1122 00:34:00.719962  286707 oci.go:103] Successfully created a docker volume kindnet-239758
	I1122 00:34:00.720153  286707 cli_runner.go:164] Run: docker run --rm --name kindnet-239758-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-239758 --entrypoint /usr/bin/test -v kindnet-239758:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -d /var/lib
	I1122 00:34:01.176816  286707 oci.go:107] Successfully prepared a docker volume kindnet-239758
	I1122 00:34:01.176887  286707 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:34:01.176904  286707 kic.go:194] Starting extracting preloaded images to volume ...
	I1122 00:34:01.176991  286707 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21934-9122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-239758:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -I lz4 -xf /preloaded.tar -C /extractDir
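Three steps above stage the node's /var: a named volume is created, a throwaway kicbase container runs /usr/bin/test -d /var/lib to confirm the volume mounts, and a second throwaway container untars the lz4 preload tarball into it. What landed in the volume can be spot-checked afterwards with a hypothetical probe like:

	# any small image with ls works; alpine is an assumption, not from the run
	docker run --rm -v kindnet-239758:/var alpine ls /var/lib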
	W1122 00:34:00.422572  280462 pod_ready.go:104] pod "coredns-66bc5c9577-np5nq" is not "Ready", error: <nil>
	W1122 00:34:02.923012  280462 pod_ready.go:104] pod "coredns-66bc5c9577-np5nq" is not "Ready", error: <nil>
	I1122 00:34:04.118824  284750 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-239758
	
	I1122 00:34:04.119003  284750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-239758
	I1122 00:34:04.142610  284750 main.go:143] libmachine: Using SSH client type: native
	I1122 00:34:04.142962  284750 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1122 00:34:04.142996  284750 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-239758' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-239758/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-239758' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1122 00:34:04.272392  284750 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1122 00:34:04.272425  284750 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21934-9122/.minikube CaCertPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21934-9122/.minikube}
	I1122 00:34:04.272451  284750 ubuntu.go:190] setting up certificates
	I1122 00:34:04.272463  284750 provision.go:84] configureAuth start
	I1122 00:34:04.272521  284750 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-239758
	I1122 00:34:04.289928  284750 provision.go:143] copyHostCerts
	I1122 00:34:04.289988  284750 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9122/.minikube/ca.pem, removing ...
	I1122 00:34:04.289996  284750 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9122/.minikube/ca.pem
	I1122 00:34:04.292375  284750 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21934-9122/.minikube/ca.pem (1078 bytes)
	I1122 00:34:04.292576  284750 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9122/.minikube/cert.pem, removing ...
	I1122 00:34:04.292611  284750 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9122/.minikube/cert.pem
	I1122 00:34:04.292655  284750 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21934-9122/.minikube/cert.pem (1123 bytes)
	I1122 00:34:04.292730  284750 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9122/.minikube/key.pem, removing ...
	I1122 00:34:04.292739  284750 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9122/.minikube/key.pem
	I1122 00:34:04.292779  284750 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21934-9122/.minikube/key.pem (1675 bytes)
	I1122 00:34:04.292846  284750 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21934-9122/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca-key.pem org=jenkins.auto-239758 san=[127.0.0.1 192.168.76.2 auto-239758 localhost minikube]
	I1122 00:34:04.406300  284750 provision.go:177] copyRemoteCerts
	I1122 00:34:04.406361  284750 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1122 00:34:04.406419  284750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-239758
	I1122 00:34:04.424432  284750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/auto-239758/id_rsa Username:docker}
	I1122 00:34:04.516478  284750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1122 00:34:04.565871  284750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1122 00:34:04.588963  284750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1122 00:34:04.606955  284750 provision.go:87] duration metric: took 334.476703ms to configureAuth
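The server cert minted at 00:34:04.292846 carries the SANs listed there (127.0.0.1, 192.168.76.2, auto-239758, localhost, minikube). A hypothetical check against the copy just placed on the node (requires OpenSSL 1.1.1+ for -ext; not run by the test):

	openssl x509 -in /etc/docker/server.pem -noout -ext subjectAltName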
	I1122 00:34:04.606984  284750 ubuntu.go:206] setting minikube options for container-runtime
	I1122 00:34:04.607185  284750 config.go:182] Loaded profile config "auto-239758": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:34:04.607340  284750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-239758
	I1122 00:34:04.625114  284750 main.go:143] libmachine: Using SSH client type: native
	I1122 00:34:04.625403  284750 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1122 00:34:04.625423  284750 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1122 00:34:05.167489  284750 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1122 00:34:05.167519  284750 machine.go:97] duration metric: took 4.518520944s to provisionDockerMachine
	I1122 00:34:05.167531  284750 client.go:176] duration metric: took 10.981764453s to LocalClient.Create
	I1122 00:34:05.167547  284750 start.go:167] duration metric: took 10.981824149s to libmachine.API.Create "auto-239758"
	I1122 00:34:05.167559  284750 start.go:293] postStartSetup for "auto-239758" (driver="docker")
	I1122 00:34:05.167570  284750 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1122 00:34:05.167647  284750 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1122 00:34:05.167687  284750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-239758
	I1122 00:34:05.185427  284750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/auto-239758/id_rsa Username:docker}
	I1122 00:34:05.372282  284750 ssh_runner.go:195] Run: cat /etc/os-release
	I1122 00:34:05.375818  284750 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1122 00:34:05.375843  284750 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1122 00:34:05.375853  284750 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-9122/.minikube/addons for local assets ...
	I1122 00:34:05.375911  284750 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-9122/.minikube/files for local assets ...
	I1122 00:34:05.376006  284750 filesync.go:149] local asset: /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem -> 145852.pem in /etc/ssl/certs
	I1122 00:34:05.376137  284750 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1122 00:34:05.383538  284750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem --> /etc/ssl/certs/145852.pem (1708 bytes)
	I1122 00:34:05.485785  284750 start.go:296] duration metric: took 318.214083ms for postStartSetup
	I1122 00:34:05.546107  284750 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-239758
	I1122 00:34:05.564097  284750 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/auto-239758/config.json ...
	I1122 00:34:05.650901  284750 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:34:05.650979  284750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-239758
	I1122 00:34:05.668711  284750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/auto-239758/id_rsa Username:docker}
	I1122 00:34:05.755657  284750 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 00:34:05.759902  284750 start.go:128] duration metric: took 11.576041986s to createHost
	I1122 00:34:05.759926  284750 start.go:83] releasing machines lock for "auto-239758", held for 11.576180726s
	I1122 00:34:05.759987  284750 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-239758
	I1122 00:34:05.777243  284750 ssh_runner.go:195] Run: cat /version.json
	I1122 00:34:05.777285  284750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-239758
	I1122 00:34:05.777325  284750 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1122 00:34:05.777424  284750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-239758
	I1122 00:34:05.796354  284750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/auto-239758/id_rsa Username:docker}
	I1122 00:34:05.796714  284750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/auto-239758/id_rsa Username:docker}
	I1122 00:34:05.881752  284750 ssh_runner.go:195] Run: systemctl --version
	I1122 00:34:05.942870  284750 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1122 00:34:05.979281  284750 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1122 00:34:05.983997  284750 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1122 00:34:05.984074  284750 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1122 00:34:06.228989  284750 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
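The find/-exec above sidelines competing bridge CNI configs by appending .mk_disabled to their filenames so CRI-O's CNI loader skips them; kindnet or another plugin supplies the pod network instead (see the CNI manager lines at 00:34:07.640). Undoing it is just the reverse rename, e.g. (hypothetical):

	sudo mv /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled \
	        /etc/cni/net.d/87-podman-bridge.conflist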
	I1122 00:34:06.229018  284750 start.go:496] detecting cgroup driver to use...
	I1122 00:34:06.229048  284750 detect.go:190] detected "systemd" cgroup driver on host os
	I1122 00:34:06.229108  284750 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1122 00:34:06.244914  284750 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1122 00:34:06.256462  284750 docker.go:218] disabling cri-docker service (if available) ...
	I1122 00:34:06.256571  284750 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1122 00:34:06.271664  284750 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1122 00:34:06.287714  284750 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1122 00:34:06.374112  284750 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1122 00:34:06.534222  284750 docker.go:234] disabling docker service ...
	I1122 00:34:06.534301  284750 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1122 00:34:06.552211  284750 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1122 00:34:06.564349  284750 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1122 00:34:06.695008  284750 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1122 00:34:06.843695  284750 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1122 00:34:06.856688  284750 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1122 00:34:06.872231  284750 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1122 00:34:06.872294  284750 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:34:06.882409  284750 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1122 00:34:06.882472  284750 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:34:06.891665  284750 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:34:06.901265  284750 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:34:06.910300  284750 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1122 00:34:06.919165  284750 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:34:06.927846  284750 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:34:06.944623  284750 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:34:06.956159  284750 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1122 00:34:06.966345  284750 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1122 00:34:06.976921  284750 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:34:07.105566  284750 ssh_runner.go:195] Run: sudo systemctl restart crio
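Net effect of the sed pipeline at 00:34:06.87 through 00:34:06.94: after the restart, /etc/crio/crio.conf.d/02-crio.conf should carry the keys below. A hypothetical spot-check (expected values reconstructed from the commands above, not captured from disk):

	grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf
	# expected, per the seds above:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "systemd"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",   (inside default_sysctls = [...])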
	I1122 00:34:07.303438  284750 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1122 00:34:07.303512  284750 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1122 00:34:07.309697  284750 start.go:564] Will wait 60s for crictl version
	I1122 00:34:07.309759  284750 ssh_runner.go:195] Run: which crictl
	I1122 00:34:07.314462  284750 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1122 00:34:07.347108  284750 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1122 00:34:07.347191  284750 ssh_runner.go:195] Run: crio --version
	I1122 00:34:07.389151  284750 ssh_runner.go:195] Run: crio --version
	I1122 00:34:07.434508  284750 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1122 00:34:07.436260  284750 cli_runner.go:164] Run: docker network inspect auto-239758 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:34:07.462395  284750 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1122 00:34:07.467696  284750 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
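The one-liner above is minikube's replace-or-append idiom for /etc/hosts: strip any existing entry for the name, append the fresh one, and copy the temp file back with sudo (a plain > redirection onto /etc/hosts would run without root). Generalized as a hypothetical helper:

	set_hosts_entry() {   # usage: set_hosts_entry 192.168.76.1 host.minikube.internal
	  local ip="$1" name="$2" tmp
	  tmp="$(mktemp)"
	  # drop any line ending in "<TAB><name>", then append the new mapping
	  { grep -v "$(printf '\t')${name}$" /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "$tmp"
	  sudo cp "$tmp" /etc/hosts && rm -f "$tmp"
	}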
	I1122 00:34:07.485317  284750 kubeadm.go:884] updating cluster {Name:auto-239758 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-239758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1122 00:34:07.485480  284750 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:34:07.485548  284750 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:34:07.528026  284750 crio.go:514] all images are preloaded for cri-o runtime.
	I1122 00:34:07.528076  284750 crio.go:433] Images already preloaded, skipping extraction
	I1122 00:34:07.528134  284750 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:34:07.574918  284750 crio.go:514] all images are preloaded for cri-o runtime.
	I1122 00:34:07.575107  284750 cache_images.go:86] Images are preloaded, skipping loading
	I1122 00:34:07.575137  284750 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1122 00:34:07.575299  284750 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-239758 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-239758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1122 00:34:07.575406  284750 ssh_runner.go:195] Run: crio config
	I1122 00:34:07.640432  284750 cni.go:84] Creating CNI manager for ""
	I1122 00:34:07.640487  284750 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1122 00:34:07.640505  284750 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1122 00:34:07.640527  284750 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-239758 NodeName:auto-239758 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1122 00:34:07.640655  284750 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-239758"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1122 00:34:07.640709  284750 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1122 00:34:07.653367  284750 binaries.go:51] Found k8s binaries, skipping transfer
	I1122 00:34:07.653438  284750 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1122 00:34:07.666340  284750 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1122 00:34:07.687710  284750 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1122 00:34:07.711930  284750 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2207 bytes)
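The kubeadm manifest generated above is shipped to the node as /var/tmp/minikube/kubeadm.yaml.new (the 2207-byte scp here) and promoted to kubeadm.yaml just before init (see 00:34:08.623 below). It could be sanity-checked offline with kubeadm's own validator (hypothetical; the test run does not do this):

	kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new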
	I1122 00:34:07.732223  284750 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1122 00:34:07.736779  284750 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:34:07.750898  284750 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:34:07.858680  284750 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:34:07.883886  284750 certs.go:69] Setting up /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/auto-239758 for IP: 192.168.76.2
	I1122 00:34:07.883910  284750 certs.go:195] generating shared ca certs ...
	I1122 00:34:07.883931  284750 certs.go:227] acquiring lock for ca certs: {Name:mk04e97b22fe69e444baefaaf0d53a0afe979470 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:34:07.884352  284750 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21934-9122/.minikube/ca.key
	I1122 00:34:07.884635  284750 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21934-9122/.minikube/proxy-client-ca.key
	I1122 00:34:07.884665  284750 certs.go:257] generating profile certs ...
	I1122 00:34:07.884744  284750 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/auto-239758/client.key
	I1122 00:34:07.884771  284750 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/auto-239758/client.crt with IP's: []
	I1122 00:34:07.978603  284750 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/auto-239758/client.crt ...
	I1122 00:34:07.978642  284750 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/auto-239758/client.crt: {Name:mkfc1184f4ba320b02dd5ec6ab99f2616684acae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:34:07.978825  284750 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/auto-239758/client.key ...
	I1122 00:34:07.978844  284750 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/auto-239758/client.key: {Name:mkf59416b7d53191b3a67243ba8eb72950bb0642 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:34:07.978985  284750 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/auto-239758/apiserver.key.90ce4727
	I1122 00:34:07.979011  284750 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/auto-239758/apiserver.crt.90ce4727 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1122 00:34:08.048292  284750 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/auto-239758/apiserver.crt.90ce4727 ...
	I1122 00:34:08.048325  284750 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/auto-239758/apiserver.crt.90ce4727: {Name:mk215cfd1a9a36c821e4052a239d52967b43892c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:34:08.048526  284750 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/auto-239758/apiserver.key.90ce4727 ...
	I1122 00:34:08.048562  284750 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/auto-239758/apiserver.key.90ce4727: {Name:mke5dfec5af6ec29ceb216011f111cb78b25a57b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:34:08.048694  284750 certs.go:382] copying /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/auto-239758/apiserver.crt.90ce4727 -> /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/auto-239758/apiserver.crt
	I1122 00:34:08.048808  284750 certs.go:386] copying /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/auto-239758/apiserver.key.90ce4727 -> /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/auto-239758/apiserver.key
	I1122 00:34:08.048910  284750 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/auto-239758/proxy-client.key
	I1122 00:34:08.048927  284750 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/auto-239758/proxy-client.crt with IP's: []
	I1122 00:34:08.093124  284750 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/auto-239758/proxy-client.crt ...
	I1122 00:34:08.093155  284750 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/auto-239758/proxy-client.crt: {Name:mk3be27dd950073f5eb01d6f27ac19270180f360 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:34:08.093403  284750 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/auto-239758/proxy-client.key ...
	I1122 00:34:08.093425  284750 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/auto-239758/proxy-client.key: {Name:mk0fdf50514a0c36cbff6b5580bafb5956031ef8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:34:08.093683  284750 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/14585.pem (1338 bytes)
	W1122 00:34:08.093732  284750 certs.go:480] ignoring /home/jenkins/minikube-integration/21934-9122/.minikube/certs/14585_empty.pem, impossibly tiny 0 bytes
	I1122 00:34:08.093747  284750 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca-key.pem (1679 bytes)
	I1122 00:34:08.093779  284750 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem (1078 bytes)
	I1122 00:34:08.093842  284750 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem (1123 bytes)
	I1122 00:34:08.093955  284750 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/key.pem (1675 bytes)
	I1122 00:34:08.094028  284750 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem (1708 bytes)
	I1122 00:34:08.094841  284750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1122 00:34:08.117896  284750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1122 00:34:08.139903  284750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1122 00:34:08.162632  284750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1122 00:34:08.181918  284750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/auto-239758/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1122 00:34:08.198669  284750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/auto-239758/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1122 00:34:08.217021  284750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/auto-239758/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1122 00:34:08.237536  284750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/auto-239758/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1122 00:34:08.255084  284750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1122 00:34:08.274447  284750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/certs/14585.pem --> /usr/share/ca-certificates/14585.pem (1338 bytes)
	I1122 00:34:08.292371  284750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem --> /usr/share/ca-certificates/145852.pem (1708 bytes)
	I1122 00:34:08.313406  284750 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1122 00:34:08.327707  284750 ssh_runner.go:195] Run: openssl version
	I1122 00:34:08.335128  284750 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1122 00:34:08.344923  284750 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:34:08.349225  284750 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 23:46 /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:34:08.349291  284750 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:34:08.401896  284750 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1122 00:34:08.412848  284750 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14585.pem && ln -fs /usr/share/ca-certificates/14585.pem /etc/ssl/certs/14585.pem"
	I1122 00:34:08.423466  284750 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14585.pem
	I1122 00:34:08.428042  284750 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 23:52 /usr/share/ca-certificates/14585.pem
	I1122 00:34:08.428107  284750 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14585.pem
	I1122 00:34:08.481001  284750 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14585.pem /etc/ssl/certs/51391683.0"
	I1122 00:34:08.492426  284750 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145852.pem && ln -fs /usr/share/ca-certificates/145852.pem /etc/ssl/certs/145852.pem"
	I1122 00:34:08.502732  284750 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145852.pem
	I1122 00:34:08.507224  284750 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 23:52 /usr/share/ca-certificates/145852.pem
	I1122 00:34:08.507276  284750 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145852.pem
	I1122 00:34:08.563489  284750 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145852.pem /etc/ssl/certs/3ec20f2e.0"
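The hash-and-symlink passes from 00:34:08.34 to 00:34:08.56 follow OpenSSL's lookup convention: CA directories are searched by <subject-hash>.0 filenames, so each PEM linked into /etc/ssl/certs also gets a hash-named symlink. For one cert the pair of commands amounts to:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 above
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"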
	I1122 00:34:08.574437  284750 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1122 00:34:08.578981  284750 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1122 00:34:08.579043  284750 kubeadm.go:401] StartCluster: {Name:auto-239758 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-239758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:34:08.579161  284750 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1122 00:34:08.579238  284750 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1122 00:34:08.613589  284750 cri.go:89] found id: ""
	I1122 00:34:08.613653  284750 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1122 00:34:08.623321  284750 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1122 00:34:08.632641  284750 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1122 00:34:08.632691  284750 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1122 00:34:08.642362  284750 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1122 00:34:08.642379  284750 kubeadm.go:158] found existing configuration files:
	
	I1122 00:34:08.642419  284750 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1122 00:34:08.651933  284750 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1122 00:34:08.651985  284750 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1122 00:34:08.661872  284750 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1122 00:34:08.671909  284750 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1122 00:34:08.671957  284750 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1122 00:34:08.681346  284750 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1122 00:34:08.691576  284750 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1122 00:34:08.691629  284750 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1122 00:34:08.699968  284750 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1122 00:34:08.709825  284750 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1122 00:34:08.709878  284750 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1122 00:34:08.719658  284750 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1122 00:34:08.768404  284750 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1122 00:34:08.768481  284750 kubeadm.go:319] [preflight] Running pre-flight checks
	I1122 00:34:08.795229  284750 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1122 00:34:08.795350  284750 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1122 00:34:08.795418  284750 kubeadm.go:319] OS: Linux
	I1122 00:34:08.795482  284750 kubeadm.go:319] CGROUPS_CPU: enabled
	I1122 00:34:08.795562  284750 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1122 00:34:08.795639  284750 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1122 00:34:08.795713  284750 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1122 00:34:08.795794  284750 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1122 00:34:08.795882  284750 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1122 00:34:08.795950  284750 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1122 00:34:08.796068  284750 kubeadm.go:319] CGROUPS_IO: enabled
	I1122 00:34:08.865279  284750 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1122 00:34:08.865455  284750 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1122 00:34:08.865589  284750 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1122 00:34:08.873154  284750 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1122 00:34:08.878355  284750 out.go:252]   - Generating certificates and keys ...
	I1122 00:34:08.878464  284750 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1122 00:34:08.878583  284750 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1122 00:34:06.781846  286707 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21934-9122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-239758:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -I lz4 -xf /preloaded.tar -C /extractDir: (5.604780911s)
	I1122 00:34:06.781887  286707 kic.go:203] duration metric: took 5.604978509s to extract preloaded images to volume ...
	W1122 00:34:06.782006  286707 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1122 00:34:06.782049  286707 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1122 00:34:06.782130  286707 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1122 00:34:06.842182  286707 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-239758 --name kindnet-239758 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-239758 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-239758 --network kindnet-239758 --ip 192.168.103.2 --volume kindnet-239758:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e
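The docker run above publishes ports 22, 2376, 5000, 8443 and 32443 on ephemeral 127.0.0.1 ports; every later SSH step resolves the mapped port the same way, which is why 33108 appears in the libmachine lines below:

	docker container inspect -f \
	  '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' kindnet-239758   # -> 33108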
	I1122 00:34:07.204655  286707 cli_runner.go:164] Run: docker container inspect kindnet-239758 --format={{.State.Running}}
	I1122 00:34:07.229271  286707 cli_runner.go:164] Run: docker container inspect kindnet-239758 --format={{.State.Status}}
	I1122 00:34:07.253505  286707 cli_runner.go:164] Run: docker exec kindnet-239758 stat /var/lib/dpkg/alternatives/iptables
	I1122 00:34:07.310111  286707 oci.go:144] the created container "kindnet-239758" has a running status.
	I1122 00:34:07.310165  286707 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21934-9122/.minikube/machines/kindnet-239758/id_rsa...
	I1122 00:34:07.541986  286707 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21934-9122/.minikube/machines/kindnet-239758/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1122 00:34:07.582736  286707 cli_runner.go:164] Run: docker container inspect kindnet-239758 --format={{.State.Status}}
	I1122 00:34:07.610137  286707 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1122 00:34:07.610222  286707 kic_runner.go:114] Args: [docker exec --privileged kindnet-239758 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1122 00:34:07.672689  286707 cli_runner.go:164] Run: docker container inspect kindnet-239758 --format={{.State.Status}}
	I1122 00:34:07.702200  286707 machine.go:94] provisionDockerMachine start ...
	I1122 00:34:07.702373  286707 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-239758
	I1122 00:34:07.729237  286707 main.go:143] libmachine: Using SSH client type: native
	I1122 00:34:07.729964  286707 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1122 00:34:07.730007  286707 main.go:143] libmachine: About to run SSH command:
	hostname
	I1122 00:34:07.871626  286707 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-239758
	
	I1122 00:34:07.871670  286707 ubuntu.go:182] provisioning hostname "kindnet-239758"
	I1122 00:34:07.871736  286707 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-239758
	I1122 00:34:07.896204  286707 main.go:143] libmachine: Using SSH client type: native
	I1122 00:34:07.896540  286707 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1122 00:34:07.896566  286707 main.go:143] libmachine: About to run SSH command:
	sudo hostname kindnet-239758 && echo "kindnet-239758" | sudo tee /etc/hostname
	I1122 00:34:08.051684  286707 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-239758
	
	I1122 00:34:08.051768  286707 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-239758
	I1122 00:34:08.073984  286707 main.go:143] libmachine: Using SSH client type: native
	I1122 00:34:08.074284  286707 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1122 00:34:08.074330  286707 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-239758' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-239758/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-239758' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1122 00:34:08.213364  286707 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1122 00:34:08.213390  286707 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21934-9122/.minikube CaCertPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21934-9122/.minikube}
	I1122 00:34:08.213422  286707 ubuntu.go:190] setting up certificates
	I1122 00:34:08.213435  286707 provision.go:84] configureAuth start
	I1122 00:34:08.213495  286707 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-239758
	I1122 00:34:08.235768  286707 provision.go:143] copyHostCerts
	I1122 00:34:08.235832  286707 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9122/.minikube/key.pem, removing ...
	I1122 00:34:08.235841  286707 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9122/.minikube/key.pem
	I1122 00:34:08.235892  286707 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21934-9122/.minikube/key.pem (1675 bytes)
	I1122 00:34:08.235983  286707 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9122/.minikube/ca.pem, removing ...
	I1122 00:34:08.235993  286707 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9122/.minikube/ca.pem
	I1122 00:34:08.236022  286707 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21934-9122/.minikube/ca.pem (1078 bytes)
	I1122 00:34:08.236098  286707 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9122/.minikube/cert.pem, removing ...
	I1122 00:34:08.236109  286707 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9122/.minikube/cert.pem
	I1122 00:34:08.236153  286707 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21934-9122/.minikube/cert.pem (1123 bytes)
	I1122 00:34:08.236206  286707 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21934-9122/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca-key.pem org=jenkins.kindnet-239758 san=[127.0.0.1 192.168.103.2 kindnet-239758 localhost minikube]
	I1122 00:34:08.447938  286707 provision.go:177] copyRemoteCerts
	I1122 00:34:08.447996  286707 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1122 00:34:08.448043  286707 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-239758
	I1122 00:34:08.470946  286707 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/kindnet-239758/id_rsa Username:docker}
	I1122 00:34:08.572892  286707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1122 00:34:08.599559  286707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1122 00:34:08.621488  286707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1122 00:34:08.643004  286707 provision.go:87] duration metric: took 429.558065ms to configureAuth
	I1122 00:34:08.643027  286707 ubuntu.go:206] setting minikube options for container-runtime
	I1122 00:34:08.643359  286707 config.go:182] Loaded profile config "kindnet-239758": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:34:08.643486  286707 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-239758
	I1122 00:34:08.665788  286707 main.go:143] libmachine: Using SSH client type: native
	I1122 00:34:08.666160  286707 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1122 00:34:08.666189  286707 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1122 00:34:08.971415  286707 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1122 00:34:08.971444  286707 machine.go:97] duration metric: took 1.269155824s to provisionDockerMachine
	I1122 00:34:08.971460  286707 client.go:176] duration metric: took 8.431659485s to LocalClient.Create
	I1122 00:34:08.971486  286707 start.go:167] duration metric: took 8.431722153s to libmachine.API.Create "kindnet-239758"
	I1122 00:34:08.971502  286707 start.go:293] postStartSetup for "kindnet-239758" (driver="docker")
	I1122 00:34:08.971519  286707 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1122 00:34:08.971614  286707 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1122 00:34:08.971669  286707 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-239758
	I1122 00:34:08.993625  286707 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/kindnet-239758/id_rsa Username:docker}
	I1122 00:34:09.085884  286707 ssh_runner.go:195] Run: cat /etc/os-release
	I1122 00:34:09.089670  286707 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1122 00:34:09.089702  286707 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1122 00:34:09.089714  286707 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-9122/.minikube/addons for local assets ...
	I1122 00:34:09.089768  286707 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-9122/.minikube/files for local assets ...
	I1122 00:34:09.089859  286707 filesync.go:149] local asset: /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem -> 145852.pem in /etc/ssl/certs
	I1122 00:34:09.089976  286707 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1122 00:34:09.097639  286707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem --> /etc/ssl/certs/145852.pem (1708 bytes)
	I1122 00:34:09.118650  286707 start.go:296] duration metric: took 147.131879ms for postStartSetup
	I1122 00:34:09.119156  286707 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-239758
	I1122 00:34:09.138426  286707 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/kindnet-239758/config.json ...
	I1122 00:34:09.138673  286707 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:34:09.138719  286707 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-239758
	I1122 00:34:09.160473  286707 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/kindnet-239758/id_rsa Username:docker}
	I1122 00:34:09.249627  286707 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 00:34:09.254093  286707 start.go:128] duration metric: took 8.716893046s to createHost
	I1122 00:34:09.254113  286707 start.go:83] releasing machines lock for "kindnet-239758", held for 8.717064873s
	I1122 00:34:09.254174  286707 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-239758
	I1122 00:34:09.272883  286707 ssh_runner.go:195] Run: cat /version.json
	I1122 00:34:09.272928  286707 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-239758
	I1122 00:34:09.272955  286707 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1122 00:34:09.273021  286707 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-239758
	I1122 00:34:09.290441  286707 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/kindnet-239758/id_rsa Username:docker}
	I1122 00:34:09.291435  286707 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/kindnet-239758/id_rsa Username:docker}
	I1122 00:34:09.451506  286707 ssh_runner.go:195] Run: systemctl --version
	I1122 00:34:09.457957  286707 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1122 00:34:09.491607  286707 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1122 00:34:09.496088  286707 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1122 00:34:09.496151  286707 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1122 00:34:09.520321  286707 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1122 00:34:09.520344  286707 start.go:496] detecting cgroup driver to use...
	I1122 00:34:09.520371  286707 detect.go:190] detected "systemd" cgroup driver on host os
	I1122 00:34:09.520410  286707 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1122 00:34:09.536015  286707 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1122 00:34:09.548165  286707 docker.go:218] disabling cri-docker service (if available) ...
	I1122 00:34:09.548235  286707 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1122 00:34:09.566505  286707 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1122 00:34:09.585506  286707 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1122 00:34:09.667413  286707 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1122 00:34:09.756721  286707 docker.go:234] disabling docker service ...
	I1122 00:34:09.756781  286707 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1122 00:34:09.774004  286707 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1122 00:34:09.786746  286707 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1122 00:34:09.871875  286707 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1122 00:34:09.960016  286707 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1122 00:34:09.971742  286707 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1122 00:34:09.985150  286707 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1122 00:34:09.985199  286707 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:34:09.994664  286707 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1122 00:34:09.994717  286707 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:34:10.003251  286707 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:34:10.011083  286707 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:34:10.019408  286707 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1122 00:34:10.026820  286707 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:34:10.034746  286707 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:34:10.048757  286707 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
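	Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with entries like the following (a sketch of the expected keys only; the surrounding TOML tables come from the stock drop-in shipped in the kicbase image):
	
	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]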
	I1122 00:34:10.058245  286707 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1122 00:34:10.066184  286707 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
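	The write above is equivalent to setting the sysctl key directly, and can be read back the same way:
	
	$ sysctl net.ipv4.ip_forward
	net.ipv4.ip_forward = 1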
	I1122 00:34:10.073098  286707 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:34:10.163102  286707 ssh_runner.go:195] Run: sudo systemctl restart crio
	W1122 00:34:05.421490  280462 pod_ready.go:104] pod "coredns-66bc5c9577-np5nq" is not "Ready", error: <nil>
	W1122 00:34:07.426493  280462 pod_ready.go:104] pod "coredns-66bc5c9577-np5nq" is not "Ready", error: <nil>
	W1122 00:34:09.922074  280462 pod_ready.go:104] pod "coredns-66bc5c9577-np5nq" is not "Ready", error: <nil>
	I1122 00:34:10.640673  286707 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1122 00:34:10.640740  286707 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1122 00:34:10.644606  286707 start.go:564] Will wait 60s for crictl version
	I1122 00:34:10.644664  286707 ssh_runner.go:195] Run: which crictl
	I1122 00:34:10.648089  286707 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1122 00:34:10.671508  286707 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1122 00:34:10.671580  286707 ssh_runner.go:195] Run: crio --version
	I1122 00:34:10.698099  286707 ssh_runner.go:195] Run: crio --version
	I1122 00:34:10.725264  286707 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1122 00:34:10.726363  286707 cli_runner.go:164] Run: docker network inspect kindnet-239758 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:34:10.744402  286707 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1122 00:34:10.748357  286707 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:34:10.758403  286707 kubeadm.go:884] updating cluster {Name:kindnet-239758 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-239758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1122 00:34:10.758534  286707 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:34:10.758592  286707 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:34:10.791205  286707 crio.go:514] all images are preloaded for cri-o runtime.
	I1122 00:34:10.791224  286707 crio.go:433] Images already preloaded, skipping extraction
	I1122 00:34:10.791266  286707 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:34:10.816235  286707 crio.go:514] all images are preloaded for cri-o runtime.
	I1122 00:34:10.816252  286707 cache_images.go:86] Images are preloaded, skipping loading
	I1122 00:34:10.816259  286707 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1122 00:34:10.816340  286707 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kindnet-239758 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:kindnet-239758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I1122 00:34:10.816401  286707 ssh_runner.go:195] Run: crio config
	I1122 00:34:10.860398  286707 cni.go:84] Creating CNI manager for "kindnet"
	I1122 00:34:10.860429  286707 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1122 00:34:10.860459  286707 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-239758 NodeName:kindnet-239758 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1122 00:34:10.860625  286707 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-239758"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1122 00:34:10.860702  286707 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1122 00:34:10.868685  286707 binaries.go:51] Found k8s binaries, skipping transfer
	I1122 00:34:10.868746  286707 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1122 00:34:10.876447  286707 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (365 bytes)
	I1122 00:34:10.889003  286707 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1122 00:34:10.904252  286707 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
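	At this point the rendered config sits at /var/tmp/minikube/kubeadm.yaml.new on the node. minikube does not validate it as a separate step, but the bundled binary could, as a sketch (kubeadm config validate accepts the multi-document file written above):
	
	$ sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new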
	I1122 00:34:10.917320  286707 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1122 00:34:10.921876  286707 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
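	After this edit and the earlier host.minikube.internal edit, /etc/hosts on the node carries both minikube entries:
	
	192.168.103.1	host.minikube.internal
	192.168.103.2	control-plane.minikube.internal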
	I1122 00:34:10.931993  286707 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:34:11.011069  286707 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:34:11.033111  286707 certs.go:69] Setting up /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/kindnet-239758 for IP: 192.168.103.2
	I1122 00:34:11.033132  286707 certs.go:195] generating shared ca certs ...
	I1122 00:34:11.033150  286707 certs.go:227] acquiring lock for ca certs: {Name:mk04e97b22fe69e444baefaaf0d53a0afe979470 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:34:11.033334  286707 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21934-9122/.minikube/ca.key
	I1122 00:34:11.033402  286707 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21934-9122/.minikube/proxy-client-ca.key
	I1122 00:34:11.033429  286707 certs.go:257] generating profile certs ...
	I1122 00:34:11.033504  286707 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/kindnet-239758/client.key
	I1122 00:34:11.033527  286707 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/kindnet-239758/client.crt with IP's: []
	I1122 00:34:11.163412  286707 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/kindnet-239758/client.crt ...
	I1122 00:34:11.163439  286707 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/kindnet-239758/client.crt: {Name:mk12e35357bc50b638b9d2807f95f0d949aa140f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:34:11.163629  286707 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/kindnet-239758/client.key ...
	I1122 00:34:11.163672  286707 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/kindnet-239758/client.key: {Name:mk80e4f10e8dfe338873fa0d5bb88cf1cd2ebf1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:34:11.163840  286707 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/kindnet-239758/apiserver.key.e8696ebc
	I1122 00:34:11.163871  286707 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/kindnet-239758/apiserver.crt.e8696ebc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1122 00:34:11.229424  286707 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/kindnet-239758/apiserver.crt.e8696ebc ...
	I1122 00:34:11.229444  286707 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/kindnet-239758/apiserver.crt.e8696ebc: {Name:mk828042cef16f2793302001c0a212c42c1fb697 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:34:11.229574  286707 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/kindnet-239758/apiserver.key.e8696ebc ...
	I1122 00:34:11.229595  286707 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/kindnet-239758/apiserver.key.e8696ebc: {Name:mk22bb07d09afcea8c9ea84c225ef6ad224c541c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:34:11.229708  286707 certs.go:382] copying /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/kindnet-239758/apiserver.crt.e8696ebc -> /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/kindnet-239758/apiserver.crt
	I1122 00:34:11.229804  286707 certs.go:386] copying /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/kindnet-239758/apiserver.key.e8696ebc -> /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/kindnet-239758/apiserver.key
	I1122 00:34:11.229884  286707 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/kindnet-239758/proxy-client.key
	I1122 00:34:11.229904  286707 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/kindnet-239758/proxy-client.crt with IP's: []
	I1122 00:34:11.275333  286707 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/kindnet-239758/proxy-client.crt ...
	I1122 00:34:11.275355  286707 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/kindnet-239758/proxy-client.crt: {Name:mk12cd1a58856d0f6c69eb05633c61234555c032 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:34:11.275500  286707 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/kindnet-239758/proxy-client.key ...
	I1122 00:34:11.275519  286707 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/kindnet-239758/proxy-client.key: {Name:mk904b4392ca08cd697ca3ff5a09755d4d269881 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:34:11.275723  286707 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/14585.pem (1338 bytes)
	W1122 00:34:11.275762  286707 certs.go:480] ignoring /home/jenkins/minikube-integration/21934-9122/.minikube/certs/14585_empty.pem, impossibly tiny 0 bytes
	I1122 00:34:11.275771  286707 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca-key.pem (1679 bytes)
	I1122 00:34:11.275805  286707 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem (1078 bytes)
	I1122 00:34:11.275841  286707 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem (1123 bytes)
	I1122 00:34:11.275876  286707 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/key.pem (1675 bytes)
	I1122 00:34:11.275942  286707 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem (1708 bytes)
	I1122 00:34:11.276817  286707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1122 00:34:11.295151  286707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1122 00:34:11.311357  286707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1122 00:34:11.327627  286707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1122 00:34:11.343732  286707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/kindnet-239758/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1122 00:34:11.359470  286707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/kindnet-239758/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1122 00:34:11.375586  286707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/kindnet-239758/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1122 00:34:11.392600  286707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/kindnet-239758/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1122 00:34:11.408513  286707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem --> /usr/share/ca-certificates/145852.pem (1708 bytes)
	I1122 00:34:11.427219  286707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1122 00:34:11.443637  286707 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/certs/14585.pem --> /usr/share/ca-certificates/14585.pem (1338 bytes)
	I1122 00:34:11.459368  286707 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1122 00:34:11.470726  286707 ssh_runner.go:195] Run: openssl version
	I1122 00:34:11.476163  286707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145852.pem && ln -fs /usr/share/ca-certificates/145852.pem /etc/ssl/certs/145852.pem"
	I1122 00:34:11.484980  286707 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145852.pem
	I1122 00:34:11.488518  286707 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 23:52 /usr/share/ca-certificates/145852.pem
	I1122 00:34:11.488568  286707 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145852.pem
	I1122 00:34:11.522485  286707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145852.pem /etc/ssl/certs/3ec20f2e.0"
	I1122 00:34:11.529986  286707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1122 00:34:11.537735  286707 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:34:11.541330  286707 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 23:46 /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:34:11.541379  286707 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:34:11.574711  286707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1122 00:34:11.582469  286707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14585.pem && ln -fs /usr/share/ca-certificates/14585.pem /etc/ssl/certs/14585.pem"
	I1122 00:34:11.589981  286707 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14585.pem
	I1122 00:34:11.593291  286707 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 23:52 /usr/share/ca-certificates/14585.pem
	I1122 00:34:11.593327  286707 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14585.pem
	I1122 00:34:11.626725  286707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14585.pem /etc/ssl/certs/51391683.0"
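	The .0 filenames used in these symlinks are OpenSSL subject hashes: each link is named after whatever openssl x509 -hash prints for the certificate. For minikubeCA.pem the hash seen above is b5213941, so the link amounts to:
	
	$ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	b5213941
	$ sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0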
	I1122 00:34:11.634291  286707 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1122 00:34:11.637639  286707 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1122 00:34:11.637698  286707 kubeadm.go:401] StartCluster: {Name:kindnet-239758 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-239758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:34:11.637765  286707 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1122 00:34:11.637797  286707 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1122 00:34:11.665626  286707 cri.go:89] found id: ""
	I1122 00:34:11.665689  286707 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1122 00:34:11.674708  286707 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1122 00:34:11.683213  286707 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1122 00:34:11.683266  286707 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1122 00:34:11.690937  286707 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1122 00:34:11.690951  286707 kubeadm.go:158] found existing configuration files:
	
	I1122 00:34:11.690987  286707 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1122 00:34:11.698250  286707 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1122 00:34:11.698300  286707 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1122 00:34:11.704901  286707 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1122 00:34:11.711885  286707 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1122 00:34:11.711922  286707 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1122 00:34:11.718603  286707 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1122 00:34:11.725359  286707 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1122 00:34:11.725406  286707 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1122 00:34:11.732132  286707 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1122 00:34:11.738942  286707 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1122 00:34:11.738982  286707 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1122 00:34:11.746080  286707 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1122 00:34:11.783345  286707 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1122 00:34:11.783398  286707 kubeadm.go:319] [preflight] Running pre-flight checks
	I1122 00:34:11.816825  286707 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1122 00:34:11.816907  286707 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1122 00:34:11.816950  286707 kubeadm.go:319] OS: Linux
	I1122 00:34:11.817003  286707 kubeadm.go:319] CGROUPS_CPU: enabled
	I1122 00:34:11.817100  286707 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1122 00:34:11.817197  286707 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1122 00:34:11.817281  286707 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1122 00:34:11.817372  286707 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1122 00:34:11.817464  286707 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1122 00:34:11.817567  286707 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1122 00:34:11.817640  286707 kubeadm.go:319] CGROUPS_IO: enabled
	I1122 00:34:11.877114  286707 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1122 00:34:11.877248  286707 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1122 00:34:11.877387  286707 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1122 00:34:11.884160  286707 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1122 00:34:09.133411  284750 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1122 00:34:09.263323  284750 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1122 00:34:09.661009  284750 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1122 00:34:09.894638  284750 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1122 00:34:10.397480  284750 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1122 00:34:10.397622  284750 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-239758 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1122 00:34:10.471165  284750 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1122 00:34:10.471352  284750 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-239758 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1122 00:34:10.776587  284750 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1122 00:34:11.476006  284750 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1122 00:34:11.677481  284750 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1122 00:34:11.677588  284750 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1122 00:34:12.172097  284750 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1122 00:34:12.272173  284750 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1122 00:34:12.868262  284750 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1122 00:34:13.332139  284750 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1122 00:34:13.668445  284750 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1122 00:34:13.668950  284750 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1122 00:34:13.672614  284750 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1122 00:34:13.673981  284750 out.go:252]   - Booting up control plane ...
	I1122 00:34:13.674112  284750 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1122 00:34:13.674213  284750 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1122 00:34:13.674784  284750 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1122 00:34:13.689077  284750 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1122 00:34:13.689224  284750 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1122 00:34:13.695478  284750 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1122 00:34:13.695777  284750 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1122 00:34:13.695863  284750 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1122 00:34:13.794118  284750 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1122 00:34:13.794291  284750 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1122 00:34:11.886792  286707 out.go:252]   - Generating certificates and keys ...
	I1122 00:34:11.886883  286707 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1122 00:34:11.886961  286707 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1122 00:34:12.028427  286707 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1122 00:34:12.128735  286707 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1122 00:34:12.766322  286707 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1122 00:34:12.834781  286707 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1122 00:34:12.907610  286707 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1122 00:34:12.907722  286707 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [kindnet-239758 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1122 00:34:13.114850  286707 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1122 00:34:13.114986  286707 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [kindnet-239758 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1122 00:34:13.387144  286707 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1122 00:34:13.802354  286707 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1122 00:34:13.910427  286707 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1122 00:34:13.910569  286707 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1122 00:34:13.981136  286707 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1122 00:34:14.397041  286707 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1122 00:34:14.873747  286707 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1122 00:34:15.287342  286707 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	W1122 00:34:12.422242  280462 pod_ready.go:104] pod "coredns-66bc5c9577-np5nq" is not "Ready", error: <nil>
	W1122 00:34:14.922910  280462 pod_ready.go:104] pod "coredns-66bc5c9577-np5nq" is not "Ready", error: <nil>
	I1122 00:34:16.007260  286707 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1122 00:34:16.008023  286707 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1122 00:34:16.012589  286707 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1122 00:34:14.295373  284750 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.360623ms
	I1122 00:34:14.300839  284750 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1122 00:34:14.300989  284750 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1122 00:34:14.301171  284750 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1122 00:34:14.301304  284750 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1122 00:34:15.413243  284750 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.112339016s
	I1122 00:34:16.196523  284750 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.895680308s
	I1122 00:34:17.803013  284750 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.502008322s
	I1122 00:34:17.818038  284750 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1122 00:34:17.830432  284750 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1122 00:34:17.840583  284750 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1122 00:34:17.840828  284750 kubeadm.go:319] [mark-control-plane] Marking the node auto-239758 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1122 00:34:17.849659  284750 kubeadm.go:319] [bootstrap-token] Using token: gnu25b.maidz9rsb1sn37dm
	I1122 00:34:17.851393  284750 out.go:252]   - Configuring RBAC rules ...
	I1122 00:34:17.851530  284750 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1122 00:34:17.854838  284750 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1122 00:34:17.860278  284750 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1122 00:34:17.862946  284750 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1122 00:34:17.866488  284750 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1122 00:34:17.870698  284750 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1122 00:34:18.208546  284750 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1122 00:34:18.628822  284750 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1122 00:34:19.211266  284750 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1122 00:34:19.212476  284750 kubeadm.go:319] 
	I1122 00:34:19.212567  284750 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1122 00:34:19.212572  284750 kubeadm.go:319] 
	I1122 00:34:19.212671  284750 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1122 00:34:19.212822  284750 kubeadm.go:319] 
	I1122 00:34:19.212889  284750 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1122 00:34:19.213033  284750 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1122 00:34:19.213165  284750 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1122 00:34:19.213175  284750 kubeadm.go:319] 
	I1122 00:34:19.213247  284750 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1122 00:34:19.213252  284750 kubeadm.go:319] 
	I1122 00:34:19.213315  284750 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1122 00:34:19.213319  284750 kubeadm.go:319] 
	I1122 00:34:19.213388  284750 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1122 00:34:19.213492  284750 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1122 00:34:19.213592  284750 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1122 00:34:19.213600  284750 kubeadm.go:319] 
	I1122 00:34:19.213726  284750 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1122 00:34:19.213834  284750 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1122 00:34:19.213843  284750 kubeadm.go:319] 
	I1122 00:34:19.213956  284750 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token gnu25b.maidz9rsb1sn37dm \
	I1122 00:34:19.214115  284750 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b7f0723026bed3ab68bd6fad96097e10a75fbae3d2b8c0df51e0d691a79889b0 \
	I1122 00:34:19.214145  284750 kubeadm.go:319] 	--control-plane 
	I1122 00:34:19.214158  284750 kubeadm.go:319] 
	I1122 00:34:19.214271  284750 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1122 00:34:19.214277  284750 kubeadm.go:319] 
	I1122 00:34:19.214436  284750 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token gnu25b.maidz9rsb1sn37dm \
	I1122 00:34:19.214576  284750 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:b7f0723026bed3ab68bd6fad96097e10a75fbae3d2b8c0df51e0d691a79889b0 
	I1122 00:34:19.217656  284750 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1122 00:34:19.217801  284750 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
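	The --discovery-token-ca-cert-hash value in the join commands above is the SHA-256 of the cluster CA's public key and can be recomputed on the node; a sketch using the standard kubeadm recipe, pointed at minikube's certificateDir (/var/lib/minikube/certs per the kubeadm config) instead of the default /etc/kubernetes/pki:
	
	$ openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'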
	I1122 00:34:19.217830  284750 cni.go:84] Creating CNI manager for ""
	I1122 00:34:19.217843  284750 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1122 00:34:19.219535  284750 out.go:179] * Configuring CNI (Container Networking Interface) ...
	
	
	==> CRI-O <==
	Nov 22 00:33:39 embed-certs-084979 crio[569]: time="2025-11-22T00:33:39.831417764Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 22 00:33:40 embed-certs-084979 crio[569]: time="2025-11-22T00:33:40.058796619Z" level=info msg="Removing container: d74a079781558f1182cd43c01109db894cf380dec8da7b6d01ff36bf58567df2" id=ae7c41dc-0020-448c-8b83-3fcda50ed8a8 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 22 00:33:40 embed-certs-084979 crio[569]: time="2025-11-22T00:33:40.101677368Z" level=info msg="Removed container d74a079781558f1182cd43c01109db894cf380dec8da7b6d01ff36bf58567df2: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dxs97/dashboard-metrics-scraper" id=ae7c41dc-0020-448c-8b83-3fcda50ed8a8 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 22 00:33:57 embed-certs-084979 crio[569]: time="2025-11-22T00:33:57.988452267Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=bd107cd5-0758-48ef-9a12-12f52c755863 name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:33:58 embed-certs-084979 crio[569]: time="2025-11-22T00:33:58.024537725Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=11f339df-3fef-4ed6-8ea6-a18c0e93490f name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:33:58 embed-certs-084979 crio[569]: time="2025-11-22T00:33:58.025728245Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dxs97/dashboard-metrics-scraper" id=23a9b3f6-6d6f-4d9a-844d-113abeacaa4a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:33:58 embed-certs-084979 crio[569]: time="2025-11-22T00:33:58.025872047Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:33:58 embed-certs-084979 crio[569]: time="2025-11-22T00:33:58.061698244Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:33:58 embed-certs-084979 crio[569]: time="2025-11-22T00:33:58.062331411Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:33:58 embed-certs-084979 crio[569]: time="2025-11-22T00:33:58.095669963Z" level=info msg="Created container eac069e8ad82b5ef32220afcf3eb8a231f95e7ad1eb61bc623d9fd8633ea1791: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dxs97/dashboard-metrics-scraper" id=23a9b3f6-6d6f-4d9a-844d-113abeacaa4a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:33:58 embed-certs-084979 crio[569]: time="2025-11-22T00:33:58.09634497Z" level=info msg="Starting container: eac069e8ad82b5ef32220afcf3eb8a231f95e7ad1eb61bc623d9fd8633ea1791" id=e92441e6-e12d-4728-a5f5-1ea1122e27b3 name=/runtime.v1.RuntimeService/StartContainer
	Nov 22 00:33:58 embed-certs-084979 crio[569]: time="2025-11-22T00:33:58.098553714Z" level=info msg="Started container" PID=1771 containerID=eac069e8ad82b5ef32220afcf3eb8a231f95e7ad1eb61bc623d9fd8633ea1791 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dxs97/dashboard-metrics-scraper id=e92441e6-e12d-4728-a5f5-1ea1122e27b3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=aa07ede9d3e9c59e215c3ff077fb908d4a4145e014d55700511881f47ee14512
	Nov 22 00:34:00 embed-certs-084979 crio[569]: time="2025-11-22T00:34:00.117434749Z" level=info msg="Removing container: 8a153fddc9f29ff204a07153d9c618c6f244d735dfee7fb2149fe2f3cc78e05f" id=dae9cb3c-70a4-404a-8fae-ba1ec0c8e0f9 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 22 00:34:00 embed-certs-084979 crio[569]: time="2025-11-22T00:34:00.117963423Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=9bc35895-41e8-47d4-9b6d-a725f60ff4e9 name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:34:00 embed-certs-084979 crio[569]: time="2025-11-22T00:34:00.119605102Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=79524748-1522-43ba-b082-f931d2ba125b name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:34:00 embed-certs-084979 crio[569]: time="2025-11-22T00:34:00.121098771Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=c225867e-bc05-4dc4-babb-68bf1f8c1a17 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:34:00 embed-certs-084979 crio[569]: time="2025-11-22T00:34:00.121233564Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:34:00 embed-certs-084979 crio[569]: time="2025-11-22T00:34:00.125619762Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:34:00 embed-certs-084979 crio[569]: time="2025-11-22T00:34:00.125816328Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/280aa55ec0e03cfa840ec220975c34c2ed5ade669cc6796fc11859d513100364/merged/etc/passwd: no such file or directory"
	Nov 22 00:34:00 embed-certs-084979 crio[569]: time="2025-11-22T00:34:00.125850015Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/280aa55ec0e03cfa840ec220975c34c2ed5ade669cc6796fc11859d513100364/merged/etc/group: no such file or directory"
	Nov 22 00:34:00 embed-certs-084979 crio[569]: time="2025-11-22T00:34:00.126210891Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:34:00 embed-certs-084979 crio[569]: time="2025-11-22T00:34:00.140949696Z" level=info msg="Removed container 8a153fddc9f29ff204a07153d9c618c6f244d735dfee7fb2149fe2f3cc78e05f: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dxs97/dashboard-metrics-scraper" id=dae9cb3c-70a4-404a-8fae-ba1ec0c8e0f9 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 22 00:34:00 embed-certs-084979 crio[569]: time="2025-11-22T00:34:00.1569336Z" level=info msg="Created container 214f0202a39acc33ca7e12f9cc9bbe8841fd2892b64ce24bd981f90fcc5f380d: kube-system/storage-provisioner/storage-provisioner" id=c225867e-bc05-4dc4-babb-68bf1f8c1a17 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:34:00 embed-certs-084979 crio[569]: time="2025-11-22T00:34:00.157488479Z" level=info msg="Starting container: 214f0202a39acc33ca7e12f9cc9bbe8841fd2892b64ce24bd981f90fcc5f380d" id=9dd925cf-479f-427c-951b-2e5d8b8345de name=/runtime.v1.RuntimeService/StartContainer
	Nov 22 00:34:00 embed-certs-084979 crio[569]: time="2025-11-22T00:34:00.159539997Z" level=info msg="Started container" PID=1790 containerID=214f0202a39acc33ca7e12f9cc9bbe8841fd2892b64ce24bd981f90fcc5f380d description=kube-system/storage-provisioner/storage-provisioner id=9dd925cf-479f-427c-951b-2e5d8b8345de name=/runtime.v1.RuntimeService/StartContainer sandboxID=0b132aea60feb510ae85fd376e6dab377b9269f3bfb01b83a6a2133c82a52d54
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	214f0202a39ac       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           19 seconds ago      Running             storage-provisioner         1                   0b132aea60feb       storage-provisioner                          kube-system
	eac069e8ad82b       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           21 seconds ago      Exited              dashboard-metrics-scraper   2                   aa07ede9d3e9c       dashboard-metrics-scraper-6ffb444bf9-dxs97   kubernetes-dashboard
	0bc2f72c37d29       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   43 seconds ago      Running             kubernetes-dashboard        0                   60aa0f9b1cce5       kubernetes-dashboard-855c9754f9-qrrmd        kubernetes-dashboard
	de7358749b24c       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           50 seconds ago      Running             busybox                     1                   355e0322d05e0       busybox                                      default
	7a3b2db058ecc       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           50 seconds ago      Running             coredns                     0                   99c94fd31635f       coredns-66bc5c9577-jjldt                     kube-system
	63a0c0dc4e6cf       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           50 seconds ago      Exited              storage-provisioner         0                   0b132aea60feb       storage-provisioner                          kube-system
	b2cdb618d6f51       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           50 seconds ago      Running             kindnet-cni                 0                   37acf4f7f988b       kindnet-57bxk                                kube-system
	168f33d068d77       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           50 seconds ago      Running             kube-proxy                  0                   b1b3dec0799ed       kube-proxy-lsc2k                             kube-system
	7a9dde98c18cd       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           53 seconds ago      Running             kube-scheduler              0                   5c872cbfb36f7       kube-scheduler-embed-certs-084979            kube-system
	e8c7c674c4b54       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           53 seconds ago      Running             kube-controller-manager     0                   2d3a3322e90a1       kube-controller-manager-embed-certs-084979   kube-system
	b3fad9a866aee       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           53 seconds ago      Running             etcd                        0                   03ebd62584363       etcd-embed-certs-084979                      kube-system
	551c0189a8734       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           53 seconds ago      Running             kube-apiserver              0                   8c588a2a43268       kube-apiserver-embed-certs-084979            kube-system
	
	
	==> coredns [7a3b2db058ecc0936bd81211047530ef5b9db1b29a2da62db5f78f96fef9818a] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52655 - 57844 "HINFO IN 4175754057319742776.6489283951726980942. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.46913766s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-084979
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-084979
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785
	                    minikube.k8s.io/name=embed-certs-084979
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_22T00_32_03_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 22 Nov 2025 00:32:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-084979
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 22 Nov 2025 00:34:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 22 Nov 2025 00:33:59 +0000   Sat, 22 Nov 2025 00:31:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 22 Nov 2025 00:33:59 +0000   Sat, 22 Nov 2025 00:31:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 22 Nov 2025 00:33:59 +0000   Sat, 22 Nov 2025 00:31:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 22 Nov 2025 00:33:59 +0000   Sat, 22 Nov 2025 00:32:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-084979
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 5665009e93b91d39dc05718b691e3875
	  System UUID:                c16ffbd2-b440-4b5b-8f37-f7fb083b435c
	  Boot ID:                    8e6acb0e-95c3-406d-a4d5-d86e16610b0f
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 coredns-66bc5c9577-jjldt                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m12s
	  kube-system                 etcd-embed-certs-084979                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m17s
	  kube-system                 kindnet-57bxk                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m12s
	  kube-system                 kube-apiserver-embed-certs-084979             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 kube-controller-manager-embed-certs-084979    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 kube-proxy-lsc2k                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 kube-scheduler-embed-certs-084979             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m11s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-dxs97    0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-qrrmd         0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 2m10s              kube-proxy       
	  Normal  Starting                 50s                kube-proxy       
	  Normal  Starting                 2m18s              kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m17s              kubelet          Node embed-certs-084979 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m17s              kubelet          Node embed-certs-084979 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m17s              kubelet          Node embed-certs-084979 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m13s              node-controller  Node embed-certs-084979 event: Registered Node embed-certs-084979 in Controller
	  Normal  NodeReady                91s                kubelet          Node embed-certs-084979 status is now: NodeReady
	  Normal  Starting                 55s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  54s (x8 over 55s)  kubelet          Node embed-certs-084979 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s (x8 over 55s)  kubelet          Node embed-certs-084979 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s (x8 over 55s)  kubelet          Node embed-certs-084979 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           49s                node-controller  Node embed-certs-084979 event: Registered Node embed-certs-084979 in Controller
	
	
	==> dmesg <==
	[  +0.082960] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023673] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.276450] kauditd_printk_skb: 47 callbacks suppressed
	[Nov21 23:48] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.062274] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.023896] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.023887] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.023879] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.023873] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +2.047773] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +4.031577] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[Nov21 23:49] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[ +16.383304] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[ +32.252695] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	
	
	==> etcd [b3fad9a866aee07f831f2b8d9504071e3b206772e1161a3e3fa2e5137fe54ecd] <==
	{"level":"warn","ts":"2025-11-22T00:33:28.057289Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:28.064780Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:28.074349Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:28.081277Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:28.088466Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:28.094551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:28.101546Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:28.109310Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:28.115689Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:28.128342Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:28.134510Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:28.141579Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:33:28.196344Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40754","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-22T00:33:58.287240Z","caller":"traceutil/trace.go:172","msg":"trace[1458323035] transaction","detail":"{read_only:false; response_revision:622; number_of_response:1; }","duration":"187.441255ms","start":"2025-11-22T00:33:58.099783Z","end":"2025-11-22T00:33:58.287225Z","steps":["trace[1458323035] 'process raft request'  (duration: 187.339607ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-22T00:33:59.352428Z","caller":"traceutil/trace.go:172","msg":"trace[118086471] transaction","detail":"{read_only:false; response_revision:625; number_of_response:1; }","duration":"144.121329ms","start":"2025-11-22T00:33:59.208290Z","end":"2025-11-22T00:33:59.352412Z","steps":["trace[118086471] 'process raft request'  (duration: 144.009844ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-22T00:33:59.359329Z","caller":"traceutil/trace.go:172","msg":"trace[554381958] transaction","detail":"{read_only:false; response_revision:627; number_of_response:1; }","duration":"150.04287ms","start":"2025-11-22T00:33:59.209270Z","end":"2025-11-22T00:33:59.359313Z","steps":["trace[554381958] 'process raft request'  (duration: 149.997409ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-22T00:33:59.359549Z","caller":"traceutil/trace.go:172","msg":"trace[34189896] transaction","detail":"{read_only:false; response_revision:626; number_of_response:1; }","duration":"151.196788ms","start":"2025-11-22T00:33:59.208335Z","end":"2025-11-22T00:33:59.359532Z","steps":["trace[34189896] 'process raft request'  (duration: 150.843072ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-22T00:33:59.650495Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"111.76501ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-22T00:33:59.650568Z","caller":"traceutil/trace.go:172","msg":"trace[456525280] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:628; }","duration":"111.843781ms","start":"2025-11-22T00:33:59.538709Z","end":"2025-11-22T00:33:59.650553Z","steps":["trace[456525280] 'range keys from in-memory index tree'  (duration: 111.733817ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-22T00:33:59.650874Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"169.696531ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-22T00:33:59.651437Z","caller":"traceutil/trace.go:172","msg":"trace[477154817] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:628; }","duration":"170.878575ms","start":"2025-11-22T00:33:59.480544Z","end":"2025-11-22T00:33:59.651422Z","steps":["trace[477154817] 'agreement among raft nodes before linearized reading'  (duration: 53.2539ms)","trace[477154817] 'range keys from in-memory index tree'  (duration: 116.40279ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-22T00:33:59.652978Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"116.7226ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571766331818253184 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-ey6jamipqrhivwpu2ro3mnptwm\" mod_revision:617 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-ey6jamipqrhivwpu2ro3mnptwm\" value_size:610 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-ey6jamipqrhivwpu2ro3mnptwm\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-22T00:33:59.653425Z","caller":"traceutil/trace.go:172","msg":"trace[2048328179] transaction","detail":"{read_only:false; response_revision:630; number_of_response:1; }","duration":"175.10494ms","start":"2025-11-22T00:33:59.478292Z","end":"2025-11-22T00:33:59.653397Z","steps":["trace[2048328179] 'process raft request'  (duration: 174.756091ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-22T00:33:59.653620Z","caller":"traceutil/trace.go:172","msg":"trace[1418032813] transaction","detail":"{read_only:false; response_revision:629; number_of_response:1; }","duration":"210.916108ms","start":"2025-11-22T00:33:59.442696Z","end":"2025-11-22T00:33:59.653612Z","steps":["trace[1418032813] 'process raft request'  (duration: 91.185453ms)","trace[1418032813] 'compare'  (duration: 116.506012ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-22T00:33:59.653512Z","caller":"traceutil/trace.go:172","msg":"trace[1508827890] transaction","detail":"{read_only:false; response_revision:631; number_of_response:1; }","duration":"171.058752ms","start":"2025-11-22T00:33:59.482440Z","end":"2025-11-22T00:33:59.653498Z","steps":["trace[1508827890] 'process raft request'  (duration: 170.710138ms)"],"step_count":1}
	
	
	==> kernel <==
	 00:34:20 up  1:16,  0 user,  load average: 4.35, 3.32, 2.08
	Linux embed-certs-084979 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b2cdb618d6f5111ef35374169192910ce886543535917970b8758a90f66cbbf7] <==
	I1122 00:33:29.509362       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1122 00:33:29.509580       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1122 00:33:29.509712       1 main.go:148] setting mtu 1500 for CNI 
	I1122 00:33:29.509727       1 main.go:178] kindnetd IP family: "ipv4"
	I1122 00:33:29.509746       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-22T00:33:29Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1122 00:33:29.806666       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1122 00:33:29.806798       1 controller.go:381] "Waiting for informer caches to sync"
	I1122 00:33:29.806815       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1122 00:33:29.806979       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1122 00:33:30.207007       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1122 00:33:30.207045       1 metrics.go:72] Registering metrics
	I1122 00:33:30.207147       1 controller.go:711] "Syncing nftables rules"
	I1122 00:33:39.807691       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1122 00:33:39.807742       1 main.go:301] handling current node
	I1122 00:33:49.807065       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1122 00:33:49.807104       1 main.go:301] handling current node
	I1122 00:33:59.806649       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1122 00:33:59.806697       1 main.go:301] handling current node
	I1122 00:34:09.807007       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1122 00:34:09.807042       1 main.go:301] handling current node
	I1122 00:34:19.807378       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1122 00:34:19.807422       1 main.go:301] handling current node
	
	
	==> kube-apiserver [551c0189a873461b8c5320fb2ea521e29317b304075057684cc2bffd38fa0d39] <==
	I1122 00:33:28.704968       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1122 00:33:28.705979       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1122 00:33:28.704943       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1122 00:33:28.704957       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1122 00:33:28.706646       1 aggregator.go:171] initial CRD sync complete...
	I1122 00:33:28.706682       1 autoregister_controller.go:144] Starting autoregister controller
	I1122 00:33:28.706707       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1122 00:33:28.706730       1 cache.go:39] Caches are synced for autoregister controller
	I1122 00:33:28.705272       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1122 00:33:28.724607       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1122 00:33:28.738106       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1122 00:33:28.747356       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1122 00:33:28.747736       1 policy_source.go:240] refreshing policies
	I1122 00:33:28.754375       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1122 00:33:29.057853       1 controller.go:667] quota admission added evaluator for: namespaces
	I1122 00:33:29.092874       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1122 00:33:29.113784       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1122 00:33:29.119748       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1122 00:33:29.127773       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1122 00:33:29.158250       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.68.127"}
	I1122 00:33:29.172246       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.125.214"}
	I1122 00:33:29.601714       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1122 00:33:32.025299       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1122 00:33:32.474460       1 controller.go:667] quota admission added evaluator for: endpoints
	I1122 00:33:32.525746       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [e8c7c674c4b5496f49f6a4264627256c21e25a81e0bd0024407bf75f2b148d3e] <==
	I1122 00:33:31.977553       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1122 00:33:31.980845       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1122 00:33:31.983044       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1122 00:33:31.985270       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1122 00:33:31.989570       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1122 00:33:31.990828       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1122 00:33:31.992344       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1122 00:33:31.994153       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1122 00:33:32.021694       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1122 00:33:32.021710       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1122 00:33:32.021795       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1122 00:33:32.021838       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1122 00:33:32.021869       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1122 00:33:32.021901       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1122 00:33:32.021950       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1122 00:33:32.021969       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1122 00:33:32.022011       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1122 00:33:32.023495       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1122 00:33:32.023903       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1122 00:33:32.028085       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1122 00:33:32.028105       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1122 00:33:32.029599       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1122 00:33:32.029870       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1122 00:33:32.031703       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1122 00:33:32.039212       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [168f33d068d777b87d2d6ddd27efae417eae740c606d0d8e6c3e51c038f7784f] <==
	I1122 00:33:29.383365       1 server_linux.go:53] "Using iptables proxy"
	I1122 00:33:29.447772       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1122 00:33:29.548890       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1122 00:33:29.548914       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1122 00:33:29.548973       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1122 00:33:29.566041       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1122 00:33:29.566114       1 server_linux.go:132] "Using iptables Proxier"
	I1122 00:33:29.570694       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1122 00:33:29.571584       1 server.go:527] "Version info" version="v1.34.1"
	I1122 00:33:29.571625       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:33:29.573490       1 config.go:403] "Starting serviceCIDR config controller"
	I1122 00:33:29.573515       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1122 00:33:29.573534       1 config.go:309] "Starting node config controller"
	I1122 00:33:29.573544       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1122 00:33:29.573550       1 config.go:106] "Starting endpoint slice config controller"
	I1122 00:33:29.573560       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1122 00:33:29.573540       1 config.go:200] "Starting service config controller"
	I1122 00:33:29.573578       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1122 00:33:29.673727       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1122 00:33:29.673738       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1122 00:33:29.673762       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1122 00:33:29.673755       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [7a9dde98c18cd008af1877b7920c71620a86d6002ad73e035d4cfdfd76b47f11] <==
	I1122 00:33:27.213485       1 serving.go:386] Generated self-signed cert in-memory
	W1122 00:33:28.616916       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1122 00:33:28.616948       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1122 00:33:28.616960       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1122 00:33:28.616969       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1122 00:33:28.707012       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1122 00:33:28.707041       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:33:28.710296       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:33:28.710327       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:33:28.711964       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1122 00:33:28.711969       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1122 00:33:28.811151       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 22 00:33:32 embed-certs-084979 kubelet[728]: I1122 00:33:32.775031     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qh9hr\" (UniqueName: \"kubernetes.io/projected/fdf6c2d2-5aff-4411-ab7a-2f147e9fc878-kube-api-access-qh9hr\") pod \"dashboard-metrics-scraper-6ffb444bf9-dxs97\" (UID: \"fdf6c2d2-5aff-4411-ab7a-2f147e9fc878\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dxs97"
	Nov 22 00:33:32 embed-certs-084979 kubelet[728]: I1122 00:33:32.775102     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/e0fbb25a-db5f-4d07-9c19-7181a408010c-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-qrrmd\" (UID: \"e0fbb25a-db5f-4d07-9c19-7181a408010c\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-qrrmd"
	Nov 22 00:33:32 embed-certs-084979 kubelet[728]: I1122 00:33:32.775179     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6l9dm\" (UniqueName: \"kubernetes.io/projected/e0fbb25a-db5f-4d07-9c19-7181a408010c-kube-api-access-6l9dm\") pod \"kubernetes-dashboard-855c9754f9-qrrmd\" (UID: \"e0fbb25a-db5f-4d07-9c19-7181a408010c\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-qrrmd"
	Nov 22 00:33:32 embed-certs-084979 kubelet[728]: I1122 00:33:32.775214     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/fdf6c2d2-5aff-4411-ab7a-2f147e9fc878-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-dxs97\" (UID: \"fdf6c2d2-5aff-4411-ab7a-2f147e9fc878\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dxs97"
	Nov 22 00:33:38 embed-certs-084979 kubelet[728]: I1122 00:33:38.496365     728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-qrrmd" podStartSLOduration=2.553343877 podStartE2EDuration="6.496343314s" podCreationTimestamp="2025-11-22 00:33:32 +0000 UTC" firstStartedPulling="2025-11-22 00:33:32.966700794 +0000 UTC m=+7.079716471" lastFinishedPulling="2025-11-22 00:33:36.909700236 +0000 UTC m=+11.022715908" observedRunningTime="2025-11-22 00:33:37.058681754 +0000 UTC m=+11.171697443" watchObservedRunningTime="2025-11-22 00:33:38.496343314 +0000 UTC m=+12.609359002"
	Nov 22 00:33:39 embed-certs-084979 kubelet[728]: I1122 00:33:39.053824     728 scope.go:117] "RemoveContainer" containerID="d74a079781558f1182cd43c01109db894cf380dec8da7b6d01ff36bf58567df2"
	Nov 22 00:33:40 embed-certs-084979 kubelet[728]: I1122 00:33:40.057487     728 scope.go:117] "RemoveContainer" containerID="d74a079781558f1182cd43c01109db894cf380dec8da7b6d01ff36bf58567df2"
	Nov 22 00:33:40 embed-certs-084979 kubelet[728]: I1122 00:33:40.057617     728 scope.go:117] "RemoveContainer" containerID="8a153fddc9f29ff204a07153d9c618c6f244d735dfee7fb2149fe2f3cc78e05f"
	Nov 22 00:33:40 embed-certs-084979 kubelet[728]: E1122 00:33:40.057801     728 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dxs97_kubernetes-dashboard(fdf6c2d2-5aff-4411-ab7a-2f147e9fc878)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dxs97" podUID="fdf6c2d2-5aff-4411-ab7a-2f147e9fc878"
	Nov 22 00:33:41 embed-certs-084979 kubelet[728]: I1122 00:33:41.061922     728 scope.go:117] "RemoveContainer" containerID="8a153fddc9f29ff204a07153d9c618c6f244d735dfee7fb2149fe2f3cc78e05f"
	Nov 22 00:33:41 embed-certs-084979 kubelet[728]: E1122 00:33:41.062105     728 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dxs97_kubernetes-dashboard(fdf6c2d2-5aff-4411-ab7a-2f147e9fc878)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dxs97" podUID="fdf6c2d2-5aff-4411-ab7a-2f147e9fc878"
	Nov 22 00:33:47 embed-certs-084979 kubelet[728]: I1122 00:33:47.103652     728 scope.go:117] "RemoveContainer" containerID="8a153fddc9f29ff204a07153d9c618c6f244d735dfee7fb2149fe2f3cc78e05f"
	Nov 22 00:33:47 embed-certs-084979 kubelet[728]: E1122 00:33:47.103806     728 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dxs97_kubernetes-dashboard(fdf6c2d2-5aff-4411-ab7a-2f147e9fc878)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dxs97" podUID="fdf6c2d2-5aff-4411-ab7a-2f147e9fc878"
	Nov 22 00:33:57 embed-certs-084979 kubelet[728]: I1122 00:33:57.987949     728 scope.go:117] "RemoveContainer" containerID="8a153fddc9f29ff204a07153d9c618c6f244d735dfee7fb2149fe2f3cc78e05f"
	Nov 22 00:33:59 embed-certs-084979 kubelet[728]: I1122 00:33:59.206574     728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dxs97" podStartSLOduration=21.302679591 podStartE2EDuration="27.206550315s" podCreationTimestamp="2025-11-22 00:33:32 +0000 UTC" firstStartedPulling="2025-11-22 00:33:32.969480706 +0000 UTC m=+7.082496373" lastFinishedPulling="2025-11-22 00:33:38.873351427 +0000 UTC m=+12.986367097" observedRunningTime="2025-11-22 00:33:59.205924593 +0000 UTC m=+33.318940281" watchObservedRunningTime="2025-11-22 00:33:59.206550315 +0000 UTC m=+33.319565987"
	Nov 22 00:34:00 embed-certs-084979 kubelet[728]: I1122 00:34:00.115975     728 scope.go:117] "RemoveContainer" containerID="8a153fddc9f29ff204a07153d9c618c6f244d735dfee7fb2149fe2f3cc78e05f"
	Nov 22 00:34:00 embed-certs-084979 kubelet[728]: I1122 00:34:00.116187     728 scope.go:117] "RemoveContainer" containerID="eac069e8ad82b5ef32220afcf3eb8a231f95e7ad1eb61bc623d9fd8633ea1791"
	Nov 22 00:34:00 embed-certs-084979 kubelet[728]: E1122 00:34:00.116405     728 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dxs97_kubernetes-dashboard(fdf6c2d2-5aff-4411-ab7a-2f147e9fc878)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dxs97" podUID="fdf6c2d2-5aff-4411-ab7a-2f147e9fc878"
	Nov 22 00:34:00 embed-certs-084979 kubelet[728]: I1122 00:34:00.117584     728 scope.go:117] "RemoveContainer" containerID="63a0c0dc4e6cf5ffc4c266654608cd900faa4d0733422f70622d1222784119da"
	Nov 22 00:34:07 embed-certs-084979 kubelet[728]: I1122 00:34:07.104133     728 scope.go:117] "RemoveContainer" containerID="eac069e8ad82b5ef32220afcf3eb8a231f95e7ad1eb61bc623d9fd8633ea1791"
	Nov 22 00:34:07 embed-certs-084979 kubelet[728]: E1122 00:34:07.104820     728 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-dxs97_kubernetes-dashboard(fdf6c2d2-5aff-4411-ab7a-2f147e9fc878)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-dxs97" podUID="fdf6c2d2-5aff-4411-ab7a-2f147e9fc878"
	Nov 22 00:34:14 embed-certs-084979 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 22 00:34:14 embed-certs-084979 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 22 00:34:14 embed-certs-084979 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 22 00:34:14 embed-certs-084979 systemd[1]: kubelet.service: Consumed 1.571s CPU time.
	
	
	==> kubernetes-dashboard [0bc2f72c37d29da0e0ff3321424e7cbbc4286a69d947d0bbd699c20ae15b9455] <==
	2025/11/22 00:33:36 Starting overwatch
	2025/11/22 00:33:36 Using namespace: kubernetes-dashboard
	2025/11/22 00:33:36 Using in-cluster config to connect to apiserver
	2025/11/22 00:33:36 Using secret token for csrf signing
	2025/11/22 00:33:36 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/22 00:33:36 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/22 00:33:36 Successful initial request to the apiserver, version: v1.34.1
	2025/11/22 00:33:36 Generating JWE encryption key
	2025/11/22 00:33:36 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/22 00:33:36 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/22 00:33:37 Initializing JWE encryption key from synchronized object
	2025/11/22 00:33:37 Creating in-cluster Sidecar client
	2025/11/22 00:33:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/22 00:33:37 Serving insecurely on HTTP port: 9090
	2025/11/22 00:34:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [214f0202a39acc33ca7e12f9cc9bbe8841fd2892b64ce24bd981f90fcc5f380d] <==
	I1122 00:34:00.172652       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1122 00:34:00.182202       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1122 00:34:00.182248       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1122 00:34:00.184543       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:34:03.640094       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:34:07.901511       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:34:11.499496       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:34:14.553263       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:34:17.576090       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:34:17.580653       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1122 00:34:17.580823       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1122 00:34:17.580979       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-084979_d7d010e1-02ed-40d0-bee7-7354b514748a!
	I1122 00:34:17.580980       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"85172fda-6e3b-4170-b156-9c1a3f0d4eef", APIVersion:"v1", ResourceVersion:"654", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-084979_d7d010e1-02ed-40d0-bee7-7354b514748a became leader
	W1122 00:34:17.582961       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:34:17.586507       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1122 00:34:17.681175       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-084979_d7d010e1-02ed-40d0-bee7-7354b514748a!
	W1122 00:34:19.597701       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:34:19.607503       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [63a0c0dc4e6cf5ffc4c266654608cd900faa4d0733422f70622d1222784119da] <==
	I1122 00:33:29.362995       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1122 00:33:59.364940       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
-- /stdout --
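The exited storage-provisioner above points at the likely root cause of this post-mortem: its only fatal line is a GET against the in-cluster apiserver service (https://10.96.0.1:443/version) that died with an i/o timeout. A minimal Go sketch of that probe, assuming only network reachability of the service IP (a real client would authenticate with the cluster CA and service-account token):

	package main
	
	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)
	
	func main() {
		client := &http.Client{
			Timeout: 32 * time.Second, // mirrors the ?timeout=32s in the fatal log line
			// Verification skipped only to keep this connectivity sketch self-contained.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://10.96.0.1:443/version")
		if err != nil {
			fmt.Println("apiserver unreachable:", err) // the "i/o timeout" seen above
			return
		}
		defer resp.Body.Close()
		fmt.Println("apiserver reachable:", resp.Status)
	}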
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-084979 -n embed-certs-084979
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-084979 -n embed-certs-084979: exit status 2 (338.99453ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-084979 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (6.41s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (5.61s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
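This Pause failure has the same shape as the embed-certs one above: the harness shells out to the minikube binary and fails the test on any non-zero exit. A minimal Go sketch of the invocation recorded below, assuming the out/minikube-linux-amd64 binary path and profile name shown in this report:

	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
	)
	
	func main() {
		// Binary path and profile taken from the test output below.
		cmd := exec.Command("out/minikube-linux-amd64", "pause",
			"-p", "default-k8s-diff-port-046175", "--alsologtostderr", "-v=1")
		out, err := cmd.CombinedOutput()
		os.Stdout.Write(out)
		if err != nil {
			fmt.Println("pause failed:", err) // the run below exits with status 80
		}
	}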
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-046175 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-diff-port-046175 --alsologtostderr -v=1: exit status 80 (1.627178248s)

-- stdout --
	* Pausing node default-k8s-diff-port-046175 ... 
	
-- /stdout --
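The stderr trace that follows shows what pause actually does on the node: check whether the kubelet is running, disable it, then enumerate CRI containers in the target namespaces with crictl before freezing them. A minimal Go sketch of that enumeration step, assuming crictl is available on the node (the trace below runs it over SSH via sudo):

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		// Namespaces minikube pauses by default, per pause.go in the trace below.
		for _, ns := range []string{"kube-system", "kubernetes-dashboard", "istio-operator"} {
			out, err := exec.Command("crictl", "ps", "-a", "--quiet",
				"--label", "io.kubernetes.pod.namespace="+ns).Output()
			if err != nil {
				fmt.Printf("crictl failed for %s: %v\n", ns, err)
				continue
			}
			for _, id := range strings.Fields(string(out)) {
				fmt.Println("found id:", id) // matches the cri.go "found id" lines below
			}
		}
	}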
** stderr ** 
	I1122 00:34:42.080627  297011 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:34:42.080725  297011 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:34:42.080736  297011 out.go:374] Setting ErrFile to fd 2...
	I1122 00:34:42.080743  297011 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:34:42.080951  297011 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9122/.minikube/bin
	I1122 00:34:42.081174  297011 out.go:368] Setting JSON to false
	I1122 00:34:42.081193  297011 mustload.go:66] Loading cluster: default-k8s-diff-port-046175
	I1122 00:34:42.081574  297011 config.go:182] Loaded profile config "default-k8s-diff-port-046175": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:34:42.081987  297011 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-046175 --format={{.State.Status}}
	I1122 00:34:42.100570  297011 host.go:66] Checking if "default-k8s-diff-port-046175" exists ...
	I1122 00:34:42.100847  297011 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:34:42.162871  297011 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:86 SystemTime:2025-11-22 00:34:42.151608933 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1122 00:34:42.163500  297011 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-046175 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1122 00:34:42.165315  297011 out.go:179] * Pausing node default-k8s-diff-port-046175 ... 
	I1122 00:34:42.166495  297011 host.go:66] Checking if "default-k8s-diff-port-046175" exists ...
	I1122 00:34:42.166746  297011 ssh_runner.go:195] Run: systemctl --version
	I1122 00:34:42.166802  297011 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-046175
	I1122 00:34:42.183419  297011 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/default-k8s-diff-port-046175/id_rsa Username:docker}
	I1122 00:34:42.273222  297011 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:34:42.284858  297011 pause.go:52] kubelet running: true
	I1122 00:34:42.284919  297011 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1122 00:34:42.439537  297011 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1122 00:34:42.439668  297011 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1122 00:34:42.507266  297011 cri.go:89] found id: "08df4c4e3b8f76c73937401a17c2bdb997ebb970dfdcbe313d7d1e8b59c95ad5"
	I1122 00:34:42.507292  297011 cri.go:89] found id: "4b12872c6fa61b798322e32c38f1859a68931ec051534300af7de32a14ecbb1e"
	I1122 00:34:42.507302  297011 cri.go:89] found id: "396b80189fa578113fc7607b685f5f11b40553d27cb2a04db7b505536915321a"
	I1122 00:34:42.507306  297011 cri.go:89] found id: "2293f34669ddac65d895a603acb24bfc4d87bf04bf17a68f75a667e0f0386e29"
	I1122 00:34:42.507309  297011 cri.go:89] found id: "5b02c4f39f6deaa78cd85e6b355b467c645ddb1564142788a9c2995c61b6f880"
	I1122 00:34:42.507312  297011 cri.go:89] found id: "1371a5a17f4e24662cf2becd362174f92c814b7d7c998f6684dc3377977af331"
	I1122 00:34:42.507315  297011 cri.go:89] found id: "baff64b8980c8ff7dd1c7ba87a50d4ea1b4d0bc4551fdda3b346aed0dd0806fc"
	I1122 00:34:42.507318  297011 cri.go:89] found id: "cb4effdd05eb9c31be1bd5e532b9906269e3438992f9777dc396eb3006f69f34"
	I1122 00:34:42.507320  297011 cri.go:89] found id: "c9323b87a3cb9e9f47608ebbfc01d685fde2c082c4217ffeafce458f5e9b9ead"
	I1122 00:34:42.507340  297011 cri.go:89] found id: "945b05c05cd0090b92ce4fb575bdcf88d9a6d27815280f0820a63bfd406e62a6"
	I1122 00:34:42.507348  297011 cri.go:89] found id: "e17e5c680ad7a142a3deec04ab3951d68eb8d7e36343494542d0ae2b4b532db6"
	I1122 00:34:42.507352  297011 cri.go:89] found id: ""
	I1122 00:34:42.507399  297011 ssh_runner.go:195] Run: sudo runc list -f json
	I1122 00:34:42.518891  297011 retry.go:31] will retry after 227.308268ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-22T00:34:42Z" level=error msg="open /run/runc: no such file or directory"
	I1122 00:34:42.747395  297011 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:34:42.762475  297011 pause.go:52] kubelet running: false
	I1122 00:34:42.762522  297011 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1122 00:34:42.910716  297011 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1122 00:34:42.910792  297011 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1122 00:34:42.974637  297011 cri.go:89] found id: "08df4c4e3b8f76c73937401a17c2bdb997ebb970dfdcbe313d7d1e8b59c95ad5"
	I1122 00:34:42.974660  297011 cri.go:89] found id: "4b12872c6fa61b798322e32c38f1859a68931ec051534300af7de32a14ecbb1e"
	I1122 00:34:42.974666  297011 cri.go:89] found id: "396b80189fa578113fc7607b685f5f11b40553d27cb2a04db7b505536915321a"
	I1122 00:34:42.974670  297011 cri.go:89] found id: "2293f34669ddac65d895a603acb24bfc4d87bf04bf17a68f75a667e0f0386e29"
	I1122 00:34:42.974674  297011 cri.go:89] found id: "5b02c4f39f6deaa78cd85e6b355b467c645ddb1564142788a9c2995c61b6f880"
	I1122 00:34:42.974679  297011 cri.go:89] found id: "1371a5a17f4e24662cf2becd362174f92c814b7d7c998f6684dc3377977af331"
	I1122 00:34:42.974683  297011 cri.go:89] found id: "baff64b8980c8ff7dd1c7ba87a50d4ea1b4d0bc4551fdda3b346aed0dd0806fc"
	I1122 00:34:42.974687  297011 cri.go:89] found id: "cb4effdd05eb9c31be1bd5e532b9906269e3438992f9777dc396eb3006f69f34"
	I1122 00:34:42.974691  297011 cri.go:89] found id: "c9323b87a3cb9e9f47608ebbfc01d685fde2c082c4217ffeafce458f5e9b9ead"
	I1122 00:34:42.974699  297011 cri.go:89] found id: "945b05c05cd0090b92ce4fb575bdcf88d9a6d27815280f0820a63bfd406e62a6"
	I1122 00:34:42.974704  297011 cri.go:89] found id: "e17e5c680ad7a142a3deec04ab3951d68eb8d7e36343494542d0ae2b4b532db6"
	I1122 00:34:42.974708  297011 cri.go:89] found id: ""
	I1122 00:34:42.974754  297011 ssh_runner.go:195] Run: sudo runc list -f json
	I1122 00:34:42.986668  297011 retry.go:31] will retry after 372.522204ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-22T00:34:42Z" level=error msg="open /run/runc: no such file or directory"
	I1122 00:34:43.360283  297011 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:34:43.376088  297011 pause.go:52] kubelet running: false
	I1122 00:34:43.376145  297011 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1122 00:34:43.551895  297011 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1122 00:34:43.551974  297011 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1122 00:34:43.624273  297011 cri.go:89] found id: "08df4c4e3b8f76c73937401a17c2bdb997ebb970dfdcbe313d7d1e8b59c95ad5"
	I1122 00:34:43.624297  297011 cri.go:89] found id: "4b12872c6fa61b798322e32c38f1859a68931ec051534300af7de32a14ecbb1e"
	I1122 00:34:43.624303  297011 cri.go:89] found id: "396b80189fa578113fc7607b685f5f11b40553d27cb2a04db7b505536915321a"
	I1122 00:34:43.624309  297011 cri.go:89] found id: "2293f34669ddac65d895a603acb24bfc4d87bf04bf17a68f75a667e0f0386e29"
	I1122 00:34:43.624314  297011 cri.go:89] found id: "5b02c4f39f6deaa78cd85e6b355b467c645ddb1564142788a9c2995c61b6f880"
	I1122 00:34:43.624319  297011 cri.go:89] found id: "1371a5a17f4e24662cf2becd362174f92c814b7d7c998f6684dc3377977af331"
	I1122 00:34:43.624324  297011 cri.go:89] found id: "baff64b8980c8ff7dd1c7ba87a50d4ea1b4d0bc4551fdda3b346aed0dd0806fc"
	I1122 00:34:43.624328  297011 cri.go:89] found id: "cb4effdd05eb9c31be1bd5e532b9906269e3438992f9777dc396eb3006f69f34"
	I1122 00:34:43.624332  297011 cri.go:89] found id: "c9323b87a3cb9e9f47608ebbfc01d685fde2c082c4217ffeafce458f5e9b9ead"
	I1122 00:34:43.624348  297011 cri.go:89] found id: "945b05c05cd0090b92ce4fb575bdcf88d9a6d27815280f0820a63bfd406e62a6"
	I1122 00:34:43.624353  297011 cri.go:89] found id: "e17e5c680ad7a142a3deec04ab3951d68eb8d7e36343494542d0ae2b4b532db6"
	I1122 00:34:43.624359  297011 cri.go:89] found id: ""
	I1122 00:34:43.624404  297011 ssh_runner.go:195] Run: sudo runc list -f json
	I1122 00:34:43.638161  297011 out.go:203] 
	W1122 00:34:43.639405  297011 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-22T00:34:43Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-22T00:34:43Z" level=error msg="open /run/runc: no such file or directory"
	
	W1122 00:34:43.639432  297011 out.go:285] * 
	* 
	W1122 00:34:43.643642  297011 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1122 00:34:43.645424  297011 out.go:203] 
** /stderr **
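The two "will retry after …ms" lines above (227ms, then 372ms) come from a jittered-backoff retry around the container listing. A minimal sketch of that pattern in Go; retryAfter and its parameters are hypothetical illustrations, not minikube's actual retry.go:

    // retry_sketch.go: jittered exponential backoff, as in the
    // "will retry after 227.308268ms" lines above (hypothetical helper).
    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryAfter runs fn up to attempts times, sleeping a randomized,
    // growing delay between failures; it returns the last error on give-up.
    func retryAfter(attempts int, base time.Duration, fn func() error) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		// base*2^i scaled by a random factor in [0.5, 1.5).
    		d := time.Duration(float64(base) * float64(int(1)<<i) * (0.5 + rand.Float64()))
    		fmt.Printf("will retry after %s: %v\n", d, err)
    		time.Sleep(d)
    	}
    	return err
    }

    func main() {
    	err := retryAfter(3, 200*time.Millisecond, func() error {
    		return errors.New("list running: runc: exit status 1")
    	})
    	fmt.Println("giving up:", err)
    }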
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p default-k8s-diff-port-046175 --alsologtostderr -v=1 failed: exit status 80
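Every attempt fails identically: `sudo runc list -f json` exits 1 because the default runc state directory /run/runc does not exist on this CRI-O node. A minimal sketch of a guarded lookup in Go; the candidate state roots and the crictl fallback are assumptions for illustration, not minikube's actual fix (`runc --root` and `crictl ps -a --quiet` are real flags, the latter appearing verbatim in the log above):

    // pause_probe.go: probe for a usable runc state root before calling
    // `runc list`, falling back to crictl when none exists (assumption).
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func listContainers() ([]byte, error) {
    	// Candidate state roots to try (hypothetical list).
    	for _, root := range []string{"/run/runc", "/run/crio/runc"} {
    		if _, err := os.Stat(root); err == nil {
    			return exec.Command("sudo", "runc", "--root", root, "list", "-f", "json").Output()
    		}
    	}
    	// No runc state dir at all: ask the CRI runtime directly instead.
    	return exec.Command("sudo", "crictl", "ps", "-a", "--quiet").Output()
    }

    func main() {
    	out, err := listContainers()
    	if err != nil {
    		fmt.Fprintln(os.Stderr, "list:", err)
    		os.Exit(1)
    	}
    	os.Stdout.Write(out)
    }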
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
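The proxy snapshot above is just the three standard variables read from the host environment; roughly reconstructed in Go (helper structure assumed, not the harness's actual code):

    // envsnap.go: print the proxy variables the post-mortem reports.
    package main

    import (
    	"fmt"
    	"os"
    )

    func main() {
    	for _, k := range []string{"HTTP_PROXY", "HTTPS_PROXY", "NO_PROXY"} {
    		v := os.Getenv(k)
    		if v == "" {
    			v = "<empty>" // mirror the report's placeholder for unset vars
    		}
    		fmt.Printf("%s=%q\n", k, v)
    	}
    }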
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-046175
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-046175:
-- stdout --
	[
	    {
	        "Id": "45fe2cf873e1459f1b2e2590379fa9c19405304d5d34fbc56c223b1a0d28973e",
	        "Created": "2025-11-22T00:32:41.655265951Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 280692,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-22T00:33:45.692662028Z",
	            "FinishedAt": "2025-11-22T00:33:44.719002717Z"
	        },
	        "Image": "sha256:e5906b22e872a17998ae88aee6d850484e7a99144e0db6afcf2c44a53e6042d4",
	        "ResolvConfPath": "/var/lib/docker/containers/45fe2cf873e1459f1b2e2590379fa9c19405304d5d34fbc56c223b1a0d28973e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/45fe2cf873e1459f1b2e2590379fa9c19405304d5d34fbc56c223b1a0d28973e/hostname",
	        "HostsPath": "/var/lib/docker/containers/45fe2cf873e1459f1b2e2590379fa9c19405304d5d34fbc56c223b1a0d28973e/hosts",
	        "LogPath": "/var/lib/docker/containers/45fe2cf873e1459f1b2e2590379fa9c19405304d5d34fbc56c223b1a0d28973e/45fe2cf873e1459f1b2e2590379fa9c19405304d5d34fbc56c223b1a0d28973e-json.log",
	        "Name": "/default-k8s-diff-port-046175",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-046175:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-046175",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "45fe2cf873e1459f1b2e2590379fa9c19405304d5d34fbc56c223b1a0d28973e",
	                "LowerDir": "/var/lib/docker/overlay2/565c4e45a52dea4a903d13848e091e336b960e3c2e8c160cc7329c79f51cfcf8-init/diff:/var/lib/docker/overlay2/ba9391341a6f38c98c330a240b44a37e901d779a7c95e15c141c59c46e09c348/diff",
	                "MergedDir": "/var/lib/docker/overlay2/565c4e45a52dea4a903d13848e091e336b960e3c2e8c160cc7329c79f51cfcf8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/565c4e45a52dea4a903d13848e091e336b960e3c2e8c160cc7329c79f51cfcf8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/565c4e45a52dea4a903d13848e091e336b960e3c2e8c160cc7329c79f51cfcf8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-046175",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-046175/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-046175",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-046175",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-046175",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "17032fee03037e6a281a212f15946738979f5cee4f39076c21364065665c6b12",
	            "SandboxKey": "/var/run/docker/netns/17032fee0303",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-046175": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "85b8c03d926ba0e46aa73effaa1a551cb600a9455d371f54191cd0d2f0a6ca5c",
	                    "EndpointID": "e5812ca5f443777c6da26244679cc2fa937ac1da718366c78cbd20c3ca6e437d",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "12:bf:c7:f4:75:f9",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-046175",
	                        "45fe2cf873e1"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
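The inspect output explains how the earlier SSH connection found its port: .NetworkSettings.Ports["22/tcp"][0].HostPort is 33098, matching the `new ssh client: &{IP:127.0.0.1 Port:33098 …}` line in the pause log. A small Go sketch of the same lookup without the inline Go template used above (struct trimmed to the fields needed; assumes the docker CLI is on PATH):

    // hostport.go: recover the 22/tcp host port from `docker inspect` JSON,
    // the programmatic equivalent of the template
    // {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}.
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    type inspect struct {
    	NetworkSettings struct {
    		Ports map[string][]struct {
    			HostIp   string
    			HostPort string
    		}
    	}
    }

    func sshHostPort(container string) (string, error) {
    	out, err := exec.Command("docker", "inspect", container).Output()
    	if err != nil {
    		return "", err
    	}
    	var res []inspect
    	if err := json.Unmarshal(out, &res); err != nil {
    		return "", err
    	}
    	if len(res) == 0 || len(res[0].NetworkSettings.Ports["22/tcp"]) == 0 {
    		return "", fmt.Errorf("no 22/tcp binding for %s", container)
    	}
    	return res[0].NetworkSettings.Ports["22/tcp"][0].HostPort, nil
    }

    func main() {
    	p, err := sshHostPort("default-k8s-diff-port-046175")
    	if err != nil {
    		fmt.Println("error:", err)
    		return
    	}
    	fmt.Println("ssh port:", p) // 33098 in the inspect output above
    }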
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-046175 -n default-k8s-diff-port-046175
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-046175 -n default-k8s-diff-port-046175: exit status 2 (350.522873ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
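As the "(may be ok)" note says, a non-zero `minikube status` exit can still carry a usable state on stdout ("Running" with exit status 2 here). A hedged sketch of reading the state before treating the exit code as fatal (Go; helper name hypothetical):

    // status_probe.go: a non-zero status exit with parsable stdout is
    // treated as informational rather than a hard failure (assumption
    // mirroring the harness's "(may be ok)" comment).
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func hostState(profile string) (string, error) {
    	cmd := exec.Command("minikube", "status", "--format", "{{.Host}}", "-p", profile)
    	out, err := cmd.Output() // stdout is still captured on ExitError
    	state := strings.TrimSpace(string(out))
    	if err != nil && state == "" {
    		return "", err // no usable state at all
    	}
    	return state, nil
    }

    func main() {
    	s, err := hostState("default-k8s-diff-port-046175")
    	fmt.Println(s, err)
    }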
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-046175 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-046175 logs -n 25: (1.186533395s)
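The attached log states its own line format ("Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg"); a short Go sketch for splitting such lines when post-processing these dumps:

    // klogparse.go: split a klog-style line per the format declared in the
    // dump below (regexp derived from that format string).
    package main

    import (
    	"fmt"
    	"regexp"
    )

    var klogLine = regexp.MustCompile(
    	`^([IWEF])(\d{2})(\d{2}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

    func main() {
    	line := "I1122 00:34:24.029676  293341 out.go:360] Setting OutFile to fd 1 ..."
    	m := klogLine.FindStringSubmatch(line)
    	if m == nil {
    		fmt.Println("not a klog line")
    		return
    	}
    	fmt.Printf("severity=%s month=%s day=%s time=%s pid=%s file=%s:%s msg=%q\n",
    		m[1], m[2], m[3], m[4], m[5], m[6], m[7], m[8])
    }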
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p newest-cni-531189 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-531189            │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │                     │
	│ stop    │ -p newest-cni-531189 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-531189            │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │ 22 Nov 25 00:33 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-046175 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-046175 │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-046175 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-046175 │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │ 22 Nov 25 00:33 UTC │
	│ addons  │ enable dashboard -p newest-cni-531189 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-531189            │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │ 22 Nov 25 00:33 UTC │
	│ start   │ -p newest-cni-531189 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-531189            │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │ 22 Nov 25 00:33 UTC │
	│ image   │ newest-cni-531189 image list --format=json                                                                                                                                                                                                    │ newest-cni-531189            │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │ 22 Nov 25 00:33 UTC │
	│ pause   │ -p newest-cni-531189 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-531189            │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-046175 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-046175 │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │ 22 Nov 25 00:33 UTC │
	│ start   │ -p default-k8s-diff-port-046175 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-046175 │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │ 22 Nov 25 00:34 UTC │
	│ start   │ -p kubernetes-upgrade-619859 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-619859    │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │                     │
	│ start   │ -p kubernetes-upgrade-619859 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-619859    │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │ 22 Nov 25 00:33 UTC │
	│ delete  │ -p newest-cni-531189                                                                                                                                                                                                                          │ newest-cni-531189            │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │ 22 Nov 25 00:33 UTC │
	│ delete  │ -p newest-cni-531189                                                                                                                                                                                                                          │ newest-cni-531189            │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │ 22 Nov 25 00:33 UTC │
	│ start   │ -p auto-239758 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-239758                  │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │ 22 Nov 25 00:34 UTC │
	│ delete  │ -p kubernetes-upgrade-619859                                                                                                                                                                                                                  │ kubernetes-upgrade-619859    │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │ 22 Nov 25 00:34 UTC │
	│ start   │ -p kindnet-239758 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio                                                                                                      │ kindnet-239758               │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │ 22 Nov 25 00:34 UTC │
	│ image   │ embed-certs-084979 image list --format=json                                                                                                                                                                                                   │ embed-certs-084979           │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │ 22 Nov 25 00:34 UTC │
	│ pause   │ -p embed-certs-084979 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-084979           │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │                     │
	│ delete  │ -p embed-certs-084979                                                                                                                                                                                                                         │ embed-certs-084979           │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │ 22 Nov 25 00:34 UTC │
	│ delete  │ -p embed-certs-084979                                                                                                                                                                                                                         │ embed-certs-084979           │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │ 22 Nov 25 00:34 UTC │
	│ start   │ -p calico-239758 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio                                                                                                        │ calico-239758                │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │                     │
	│ ssh     │ -p auto-239758 pgrep -a kubelet                                                                                                                                                                                                               │ auto-239758                  │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │ 22 Nov 25 00:34 UTC │
	│ image   │ default-k8s-diff-port-046175 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-046175 │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │ 22 Nov 25 00:34 UTC │
	│ pause   │ -p default-k8s-diff-port-046175 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-046175 │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/22 00:34:24
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1122 00:34:24.029676  293341 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:34:24.029769  293341 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:34:24.029775  293341 out.go:374] Setting ErrFile to fd 2...
	I1122 00:34:24.029781  293341 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:34:24.030144  293341 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9122/.minikube/bin
	I1122 00:34:24.030763  293341 out.go:368] Setting JSON to false
	I1122 00:34:24.032474  293341 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4613,"bootTime":1763767051,"procs":397,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1122 00:34:24.032550  293341 start.go:143] virtualization: kvm guest
	I1122 00:34:24.034359  293341 out.go:179] * [calico-239758] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1122 00:34:24.035702  293341 out.go:179]   - MINIKUBE_LOCATION=21934
	I1122 00:34:24.035713  293341 notify.go:221] Checking for updates...
	I1122 00:34:24.037774  293341 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 00:34:24.038817  293341 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-9122/kubeconfig
	I1122 00:34:24.039719  293341 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-9122/.minikube
	I1122 00:34:24.040772  293341 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1122 00:34:24.042535  293341 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 00:34:24.044518  293341 config.go:182] Loaded profile config "auto-239758": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:34:24.044679  293341 config.go:182] Loaded profile config "default-k8s-diff-port-046175": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:34:24.044815  293341 config.go:182] Loaded profile config "kindnet-239758": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:34:24.044946  293341 driver.go:422] Setting default libvirt URI to qemu:///system
	I1122 00:34:24.069089  293341 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1122 00:34:24.069181  293341 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:34:24.125503  293341 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-22 00:34:24.115599092 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1122 00:34:24.125637  293341 docker.go:319] overlay module found
	I1122 00:34:24.127692  293341 out.go:179] * Using the docker driver based on user configuration
	I1122 00:34:24.128801  293341 start.go:309] selected driver: docker
	I1122 00:34:24.128821  293341 start.go:930] validating driver "docker" against <nil>
	I1122 00:34:24.128834  293341 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 00:34:24.129696  293341 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:34:24.194223  293341 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-22 00:34:24.179749266 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1122 00:34:24.194464  293341 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1122 00:34:24.194695  293341 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:34:24.195982  293341 out.go:179] * Using Docker driver with root privileges
	I1122 00:34:24.197254  293341 cni.go:84] Creating CNI manager for "calico"
	I1122 00:34:24.197274  293341 start_flags.go:336] Found "Calico" CNI - setting NetworkPlugin=cni
	I1122 00:34:24.197349  293341 start.go:353] cluster config:
	{Name:calico-239758 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-239758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:34:24.199459  293341 out.go:179] * Starting "calico-239758" primary control-plane node in "calico-239758" cluster
	I1122 00:34:24.200924  293341 cache.go:134] Beginning downloading kic base image for docker with crio
	I1122 00:34:24.202070  293341 out.go:179] * Pulling base image v0.0.48-1763588073-21934 ...
	I1122 00:34:24.203215  293341 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:34:24.203263  293341 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21934-9122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1122 00:34:24.203274  293341 cache.go:65] Caching tarball of preloaded images
	I1122 00:34:24.203311  293341 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon
	I1122 00:34:24.203367  293341 preload.go:238] Found /home/jenkins/minikube-integration/21934-9122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1122 00:34:24.203386  293341 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1122 00:34:24.203489  293341 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/calico-239758/config.json ...
	I1122 00:34:24.203511  293341 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/calico-239758/config.json: {Name:mkcc0e4a7ad7f0864284895d8f9334a77f98ed17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:34:24.227620  293341 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon, skipping pull
	I1122 00:34:24.227644  293341 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e exists in daemon, skipping load
	I1122 00:34:24.227664  293341 cache.go:243] Successfully downloaded all kic artifacts
	I1122 00:34:24.227692  293341 start.go:360] acquireMachinesLock for calico-239758: {Name:mk2d48e655754253458a7b803b6f8c2a922a012a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:34:24.227803  293341 start.go:364] duration metric: took 89.565µs to acquireMachinesLock for "calico-239758"
	I1122 00:34:24.227831  293341 start.go:93] Provisioning new machine with config: &{Name:calico-239758 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-239758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1122 00:34:24.227952  293341 start.go:125] createHost starting for "" (driver="docker")
	I1122 00:34:24.154027  284750 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:34:24.233205  284750 kubeadm.go:1114] duration metric: took 4.699859925s to wait for elevateKubeSystemPrivileges
	I1122 00:34:24.233233  284750 kubeadm.go:403] duration metric: took 15.654196773s to StartCluster
	I1122 00:34:24.233250  284750 settings.go:142] acquiring lock: {Name:mk281bec5fc7c41e6f3fe8d6a1502b13a2db8fe5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:34:24.233326  284750 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21934-9122/kubeconfig
	I1122 00:34:24.234466  284750 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/kubeconfig: {Name:mk85f0b9d89ff824568ecbebf0d1111042a6bb93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:34:24.234676  284750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1122 00:34:24.234705  284750 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1122 00:34:24.234765  284750 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1122 00:34:24.234856  284750 addons.go:70] Setting storage-provisioner=true in profile "auto-239758"
	I1122 00:34:24.234877  284750 addons.go:239] Setting addon storage-provisioner=true in "auto-239758"
	I1122 00:34:24.234875  284750 addons.go:70] Setting default-storageclass=true in profile "auto-239758"
	I1122 00:34:24.234906  284750 host.go:66] Checking if "auto-239758" exists ...
	I1122 00:34:24.234914  284750 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "auto-239758"
	I1122 00:34:24.234919  284750 config.go:182] Loaded profile config "auto-239758": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:34:24.235291  284750 cli_runner.go:164] Run: docker container inspect auto-239758 --format={{.State.Status}}
	I1122 00:34:24.235417  284750 cli_runner.go:164] Run: docker container inspect auto-239758 --format={{.State.Status}}
	I1122 00:34:24.236217  284750 out.go:179] * Verifying Kubernetes components...
	I1122 00:34:24.237316  284750 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:34:24.260569  284750 addons.go:239] Setting addon default-storageclass=true in "auto-239758"
	I1122 00:34:24.260623  284750 host.go:66] Checking if "auto-239758" exists ...
	I1122 00:34:24.261105  284750 cli_runner.go:164] Run: docker container inspect auto-239758 --format={{.State.Status}}
	I1122 00:34:24.261428  284750 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1122 00:34:24.263189  284750 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:34:24.263209  284750 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1122 00:34:24.263286  284750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-239758
	I1122 00:34:24.285205  284750 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1122 00:34:24.285300  284750 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1122 00:34:24.285502  284750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-239758
	I1122 00:34:24.299617  284750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/auto-239758/id_rsa Username:docker}
	I1122 00:34:24.324601  284750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/auto-239758/id_rsa Username:docker}
	I1122 00:34:24.341787  284750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1122 00:34:24.413286  284750 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:34:24.431211  284750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:34:24.444933  284750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1122 00:34:24.562896  284750 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1122 00:34:24.564067  284750 node_ready.go:35] waiting up to 15m0s for node "auto-239758" to be "Ready" ...
	I1122 00:34:24.807443  284750 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1122 00:34:23.108525  286707 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1122 00:34:23.112709  286707 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1122 00:34:23.112726  286707 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1122 00:34:23.126810  286707 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1122 00:34:23.390159  286707 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1122 00:34:23.390257  286707 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:34:23.390346  286707 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-239758 minikube.k8s.io/updated_at=2025_11_22T00_34_23_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785 minikube.k8s.io/name=kindnet-239758 minikube.k8s.io/primary=true
	I1122 00:34:23.471140  286707 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:34:23.471140  286707 ops.go:34] apiserver oom_adj: -16
	I1122 00:34:23.971355  286707 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:34:24.471844  286707 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:34:24.971253  286707 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1122 00:34:22.421413  280462 pod_ready.go:104] pod "coredns-66bc5c9577-np5nq" is not "Ready", error: <nil>
	W1122 00:34:24.426889  280462 pod_ready.go:104] pod "coredns-66bc5c9577-np5nq" is not "Ready", error: <nil>
	I1122 00:34:25.471548  286707 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:34:25.972104  286707 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:34:26.471388  286707 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:34:26.972175  286707 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:34:27.471301  286707 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:34:27.971223  286707 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:34:28.121161  286707 kubeadm.go:1114] duration metric: took 4.730974449s to wait for elevateKubeSystemPrivileges
	I1122 00:34:28.121203  286707 kubeadm.go:403] duration metric: took 16.483509513s to StartCluster
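The repeated `kubectl get sa default` runs between 00:34:23.47 and 00:34:27.97 are a poll loop: kubeadm's default ServiceAccount must exist before the minikube-rbac ClusterRoleBinding created at 00:34:23.390257 can usefully bind to it. A sketch of that wait, with the ~500ms interval read off the log timestamps (the timeout handling is an assumption; the real loop is in kubeadm.go):

package sketch

import (
	"fmt"
	"time"
)

// waitForDefaultSA polls until `kubectl get sa default` succeeds, i.e. until
// kubeadm has created the default ServiceAccount. run executes a command on
// the node and returns its error; a nil error means the SA exists.
func waitForDefaultSA(run func(cmd string) error, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if run("sudo kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig") == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond) // matches the spacing between the logged polls
	}
	return fmt.Errorf("default serviceaccount not ready after %v", timeout)
}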
	I1122 00:34:28.121227  286707 settings.go:142] acquiring lock: {Name:mk281bec5fc7c41e6f3fe8d6a1502b13a2db8fe5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:34:28.121311  286707 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21934-9122/kubeconfig
	I1122 00:34:28.122523  286707 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/kubeconfig: {Name:mk85f0b9d89ff824568ecbebf0d1111042a6bb93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:34:28.221919  286707 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1122 00:34:28.221980  286707 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1122 00:34:28.222002  286707 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1122 00:34:28.222128  286707 addons.go:70] Setting storage-provisioner=true in profile "kindnet-239758"
	I1122 00:34:28.222142  286707 addons.go:70] Setting default-storageclass=true in profile "kindnet-239758"
	I1122 00:34:28.222172  286707 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kindnet-239758"
	I1122 00:34:28.222199  286707 config.go:182] Loaded profile config "kindnet-239758": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:34:28.222148  286707 addons.go:239] Setting addon storage-provisioner=true in "kindnet-239758"
	I1122 00:34:28.222325  286707 host.go:66] Checking if "kindnet-239758" exists ...
	I1122 00:34:28.222579  286707 cli_runner.go:164] Run: docker container inspect kindnet-239758 --format={{.State.Status}}
	I1122 00:34:28.222766  286707 cli_runner.go:164] Run: docker container inspect kindnet-239758 --format={{.State.Status}}
	I1122 00:34:28.248688  286707 out.go:179] * Verifying Kubernetes components...
	I1122 00:34:28.249873  286707 addons.go:239] Setting addon default-storageclass=true in "kindnet-239758"
	I1122 00:34:28.249913  286707 host.go:66] Checking if "kindnet-239758" exists ...
	I1122 00:34:28.250261  286707 cli_runner.go:164] Run: docker container inspect kindnet-239758 --format={{.State.Status}}
	I1122 00:34:28.268247  286707 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1122 00:34:24.808457  284750 addons.go:530] duration metric: took 573.689855ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1122 00:34:25.068304  284750 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-239758" context rescaled to 1 replicas
	W1122 00:34:26.567768  284750 node_ready.go:57] node "auto-239758" has "Ready":"False" status (will retry)
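The node_ready retries above boil down to checking the node's Ready condition. A simplified check using the k8s.io/api types (minikube's actual logic lives in node_ready.go and may differ):

package sketch

import corev1 "k8s.io/api/core/v1"

// isNodeReady reports whether the node carries a Ready condition whose status
// is True; False, Unknown, or a missing condition all count as not ready and
// trigger another "will retry" warning like the ones above.
func isNodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}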
	I1122 00:34:24.229421  293341 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1122 00:34:24.229657  293341 start.go:159] libmachine.API.Create for "calico-239758" (driver="docker")
	I1122 00:34:24.229692  293341 client.go:173] LocalClient.Create starting
	I1122 00:34:24.229762  293341 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem
	I1122 00:34:24.229792  293341 main.go:143] libmachine: Decoding PEM data...
	I1122 00:34:24.229808  293341 main.go:143] libmachine: Parsing certificate...
	I1122 00:34:24.229856  293341 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem
	I1122 00:34:24.229871  293341 main.go:143] libmachine: Decoding PEM data...
	I1122 00:34:24.229882  293341 main.go:143] libmachine: Parsing certificate...
	I1122 00:34:24.230266  293341 cli_runner.go:164] Run: docker network inspect calico-239758 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1122 00:34:24.254291  293341 cli_runner.go:211] docker network inspect calico-239758 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1122 00:34:24.254376  293341 network_create.go:284] running [docker network inspect calico-239758] to gather additional debugging logs...
	I1122 00:34:24.254396  293341 cli_runner.go:164] Run: docker network inspect calico-239758
	W1122 00:34:24.278046  293341 cli_runner.go:211] docker network inspect calico-239758 returned with exit code 1
	I1122 00:34:24.278121  293341 network_create.go:287] error running [docker network inspect calico-239758]: docker network inspect calico-239758: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network calico-239758 not found
	I1122 00:34:24.278144  293341 network_create.go:289] output of [docker network inspect calico-239758]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network calico-239758 not found
	
	** /stderr **
	I1122 00:34:24.278355  293341 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:34:24.310275  293341 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-680fbf0b84de IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8e:e6:2f:e5:eb:9f} reservation:<nil>}
	I1122 00:34:24.311593  293341 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-fb90361b5ee3 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:6a:cd:d3:0b:da:39} reservation:<nil>}
	I1122 00:34:24.312696  293341 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8fa9d9e8b5ac IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:be:02:4b:3f:d0:70} reservation:<nil>}
	I1122 00:34:24.313833  293341 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-8fcd7657b64b IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:d2:ad:c5:eb:8c:57} reservation:<nil>}
	I1122 00:34:24.314623  293341 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-85b8c03d926b IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:de:8e:84:e4:fa:a8} reservation:<nil>}
	I1122 00:34:24.316036  293341 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ed5ce0}
	I1122 00:34:24.316100  293341 network_create.go:124] attempt to create docker network calico-239758 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1122 00:34:24.316201  293341 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-239758 calico-239758
	I1122 00:34:24.390762  293341 network_create.go:108] docker network calico-239758 192.168.94.0/24 created
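The subnet scan above walks candidate private /24 blocks and takes the first one not already claimed by an existing bridge. From the values in the log (49, 58, 67, 76, 85, then 94) the third octet appears to step by 9; a sketch of that scan, with the step and bounds inferred from the log rather than confirmed from source:

package sketch

import "fmt"

// firstFreeSubnet returns the first 192.168.x.0/24 block, stepping the third
// octet by 9 from 49, that is not present in taken (the set of subnets
// already used by docker bridges).
func firstFreeSubnet(taken map[string]bool) (string, bool) {
	for octet := 49; octet <= 254; octet += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if !taken[cidr] {
			return cidr, true
		}
	}
	return "", false
}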
	I1122 00:34:24.390846  293341 kic.go:121] calculated static IP "192.168.94.2" for the "calico-239758" container
	I1122 00:34:24.390926  293341 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1122 00:34:24.417142  293341 cli_runner.go:164] Run: docker volume create calico-239758 --label name.minikube.sigs.k8s.io=calico-239758 --label created_by.minikube.sigs.k8s.io=true
	I1122 00:34:24.442277  293341 oci.go:103] Successfully created a docker volume calico-239758
	I1122 00:34:24.442404  293341 cli_runner.go:164] Run: docker run --rm --name calico-239758-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-239758 --entrypoint /usr/bin/test -v calico-239758:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -d /var/lib
	I1122 00:34:24.929977  293341 oci.go:107] Successfully prepared a docker volume calico-239758
	I1122 00:34:24.930041  293341 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:34:24.930049  293341 kic.go:194] Starting extracting preloaded images to volume ...
	I1122 00:34:24.930271  293341 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21934-9122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-239758:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -I lz4 -xf /preloaded.tar -C /extractDir
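The preload extraction above avoids copying images through SSH: the lz4 tarball is bind-mounted read-only into a throwaway kicbase container whose entrypoint is tar, which unpacks it straight into the cluster's named volume. The equivalent invocation, reconstructed from the logged `docker run`:

package sketch

import "os/exec"

// extractPreload untars an lz4-compressed preload tarball into a docker named
// volume using a disposable container, as the docker run in the log does.
// image is the pinned kicbase image digest shown in the log.
func extractPreload(tarball, volume, image string) *exec.Cmd {
	return exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
}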
	I1122 00:34:28.268685  286707 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1122 00:34:28.287433  286707 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1122 00:34:28.287531  286707 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-239758
	I1122 00:34:28.288517  286707 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:34:28.307274  286707 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/kindnet-239758/id_rsa Username:docker}
	I1122 00:34:28.310254  286707 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:34:28.310273  286707 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1122 00:34:28.310340  286707 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-239758
	I1122 00:34:28.332628  286707 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/kindnet-239758/id_rsa Username:docker}
	I1122 00:34:28.408908  286707 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1122 00:34:28.427568  286707 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:34:28.599857  286707 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:34:28.599878  286707 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1122 00:34:29.419500  286707 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1122 00:34:29.420857  286707 node_ready.go:35] waiting up to 15m0s for node "kindnet-239758" to be "Ready" ...
	I1122 00:34:29.421226  286707 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1122 00:34:29.422558  286707 addons.go:530] duration metric: took 1.200539506s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1122 00:34:29.924087  286707 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kindnet-239758" context rescaled to 1 replicas
	W1122 00:34:26.922252  280462 pod_ready.go:104] pod "coredns-66bc5c9577-np5nq" is not "Ready", error: <nil>
	I1122 00:34:28.924510  280462 pod_ready.go:94] pod "coredns-66bc5c9577-np5nq" is "Ready"
	I1122 00:34:28.924547  280462 pod_ready.go:86] duration metric: took 33.008132108s for pod "coredns-66bc5c9577-np5nq" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:28.927643  280462 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-046175" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:28.931333  280462 pod_ready.go:94] pod "etcd-default-k8s-diff-port-046175" is "Ready"
	I1122 00:34:28.931356  280462 pod_ready.go:86] duration metric: took 3.689183ms for pod "etcd-default-k8s-diff-port-046175" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:28.933253  280462 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-046175" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:28.936739  280462 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-046175" is "Ready"
	I1122 00:34:28.936759  280462 pod_ready.go:86] duration metric: took 3.488038ms for pod "kube-apiserver-default-k8s-diff-port-046175" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:28.938479  280462 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-046175" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:29.119994  280462 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-046175" is "Ready"
	I1122 00:34:29.120020  280462 pod_ready.go:86] duration metric: took 181.521421ms for pod "kube-controller-manager-default-k8s-diff-port-046175" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:29.319674  280462 pod_ready.go:83] waiting for pod "kube-proxy-jdzcl" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:29.719578  280462 pod_ready.go:94] pod "kube-proxy-jdzcl" is "Ready"
	I1122 00:34:29.719607  280462 pod_ready.go:86] duration metric: took 399.906376ms for pod "kube-proxy-jdzcl" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:29.919871  280462 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-046175" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:30.319861  280462 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-046175" is "Ready"
	I1122 00:34:30.319884  280462 pod_ready.go:86] duration metric: took 399.990303ms for pod "kube-scheduler-default-k8s-diff-port-046175" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:30.319896  280462 pod_ready.go:40] duration metric: took 34.407818682s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:34:30.364640  280462 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1122 00:34:30.366185  280462 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-046175" cluster and "default" namespace by default
	W1122 00:34:29.066769  284750 node_ready.go:57] node "auto-239758" has "Ready":"False" status (will retry)
	W1122 00:34:31.066997  284750 node_ready.go:57] node "auto-239758" has "Ready":"False" status (will retry)
	W1122 00:34:33.067118  284750 node_ready.go:57] node "auto-239758" has "Ready":"False" status (will retry)
	I1122 00:34:29.402011  293341 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21934-9122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-239758:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -I lz4 -xf /preloaded.tar -C /extractDir: (4.471697671s)
	I1122 00:34:29.402050  293341 kic.go:203] duration metric: took 4.471995178s to extract preloaded images to volume ...
	W1122 00:34:29.402153  293341 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1122 00:34:29.402209  293341 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1122 00:34:29.402261  293341 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1122 00:34:29.475735  293341 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-239758 --name calico-239758 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-239758 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-239758 --network calico-239758 --ip 192.168.94.2 --volume calico-239758:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e
	I1122 00:34:29.803136  293341 cli_runner.go:164] Run: docker container inspect calico-239758 --format={{.State.Running}}
	I1122 00:34:29.822231  293341 cli_runner.go:164] Run: docker container inspect calico-239758 --format={{.State.Status}}
	I1122 00:34:29.840074  293341 cli_runner.go:164] Run: docker exec calico-239758 stat /var/lib/dpkg/alternatives/iptables
	I1122 00:34:29.882254  293341 oci.go:144] the created container "calico-239758" has a running status.
	I1122 00:34:29.882291  293341 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21934-9122/.minikube/machines/calico-239758/id_rsa...
	I1122 00:34:30.011074  293341 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21934-9122/.minikube/machines/calico-239758/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1122 00:34:30.034955  293341 cli_runner.go:164] Run: docker container inspect calico-239758 --format={{.State.Status}}
	I1122 00:34:30.057523  293341 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1122 00:34:30.057554  293341 kic_runner.go:114] Args: [docker exec --privileged calico-239758 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1122 00:34:30.103556  293341 cli_runner.go:164] Run: docker container inspect calico-239758 --format={{.State.Status}}
	I1122 00:34:30.128715  293341 machine.go:94] provisionDockerMachine start ...
	I1122 00:34:30.128824  293341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-239758
	I1122 00:34:30.153454  293341 main.go:143] libmachine: Using SSH client type: native
	I1122 00:34:30.153820  293341 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1122 00:34:30.153838  293341 main.go:143] libmachine: About to run SSH command:
	hostname
	I1122 00:34:30.154638  293341 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34172->127.0.0.1:33113: read: connection reset by peer
	I1122 00:34:33.277873  293341 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-239758
	
	I1122 00:34:33.277899  293341 ubuntu.go:182] provisioning hostname "calico-239758"
	I1122 00:34:33.278221  293341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-239758
	I1122 00:34:33.297630  293341 main.go:143] libmachine: Using SSH client type: native
	I1122 00:34:33.297838  293341 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1122 00:34:33.297851  293341 main.go:143] libmachine: About to run SSH command:
	sudo hostname calico-239758 && echo "calico-239758" | sudo tee /etc/hostname
	I1122 00:34:33.425234  293341 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-239758
	
	I1122 00:34:33.425309  293341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-239758
	I1122 00:34:33.443338  293341 main.go:143] libmachine: Using SSH client type: native
	I1122 00:34:33.443570  293341 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1122 00:34:33.443594  293341 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-239758' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-239758/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-239758' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1122 00:34:33.561945  293341 main.go:143] libmachine: SSH cmd err, output: <nil>: 
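The multi-line SSH command above is minikube's guarded /etc/hosts update: touch the file only if no entry for the hostname exists, and prefer rewriting an existing 127.0.1.1 line over appending a duplicate. Rendered as a Go helper (a hand-written equivalent, not the generator in ubuntu.go):

package sketch

import "fmt"

// hostnameScript returns the shell fragment run over SSH above. It adds
// "127.0.1.1 <name>" only when /etc/hosts has no line ending in the hostname,
// editing an existing 127.0.1.1 entry in place when one is present.
func hostnameScript(name string) string {
	return fmt.Sprintf(`
if ! grep -xq '.*\s%[1]s' /etc/hosts; then
	if grep -xq '127.0.1.1\s.*' /etc/hosts; then
		sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
	else
		echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
	fi
fi`, name)
}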
	I1122 00:34:33.561974  293341 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21934-9122/.minikube CaCertPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21934-9122/.minikube}
	I1122 00:34:33.561996  293341 ubuntu.go:190] setting up certificates
	I1122 00:34:33.562006  293341 provision.go:84] configureAuth start
	I1122 00:34:33.562074  293341 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-239758
	I1122 00:34:33.579857  293341 provision.go:143] copyHostCerts
	I1122 00:34:33.579915  293341 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9122/.minikube/ca.pem, removing ...
	I1122 00:34:33.579925  293341 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9122/.minikube/ca.pem
	I1122 00:34:33.580005  293341 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21934-9122/.minikube/ca.pem (1078 bytes)
	I1122 00:34:33.580146  293341 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9122/.minikube/cert.pem, removing ...
	I1122 00:34:33.580159  293341 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9122/.minikube/cert.pem
	I1122 00:34:33.580204  293341 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21934-9122/.minikube/cert.pem (1123 bytes)
	I1122 00:34:33.580309  293341 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9122/.minikube/key.pem, removing ...
	I1122 00:34:33.580319  293341 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9122/.minikube/key.pem
	I1122 00:34:33.580355  293341 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21934-9122/.minikube/key.pem (1675 bytes)
	I1122 00:34:33.580443  293341 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21934-9122/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca-key.pem org=jenkins.calico-239758 san=[127.0.0.1 192.168.94.2 calico-239758 localhost minikube]
	I1122 00:34:33.612809  293341 provision.go:177] copyRemoteCerts
	I1122 00:34:33.612853  293341 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1122 00:34:33.612886  293341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-239758
	I1122 00:34:33.630182  293341 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/calico-239758/id_rsa Username:docker}
	I1122 00:34:33.718644  293341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1122 00:34:33.737188  293341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1122 00:34:33.753390  293341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1122 00:34:33.769831  293341 provision.go:87] duration metric: took 207.815394ms to configureAuth
	I1122 00:34:33.769852  293341 ubuntu.go:206] setting minikube options for container-runtime
	I1122 00:34:33.770010  293341 config.go:182] Loaded profile config "calico-239758": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:34:33.770140  293341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-239758
	I1122 00:34:33.788770  293341 main.go:143] libmachine: Using SSH client type: native
	I1122 00:34:33.788960  293341 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1122 00:34:33.788976  293341 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	W1122 00:34:31.423669  286707 node_ready.go:57] node "kindnet-239758" has "Ready":"False" status (will retry)
	W1122 00:34:33.424513  286707 node_ready.go:57] node "kindnet-239758" has "Ready":"False" status (will retry)
	I1122 00:34:34.049378  293341 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
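The sysconfig write above passes runtime flags to CRI-O through an environment file rather than editing the unit itself. A sketch of the file's contents, with the shape read from the log (10.96.0.0/12 is the cluster's default service CIDR):

package sketch

import "fmt"

// crioMinikubeOptions renders /etc/sysconfig/crio.minikube as written in the
// log: a single CRIO_MINIKUBE_OPTIONS line marking the service CIDR as an
// insecure registry range. CRI-O is restarted afterwards to pick it up.
func crioMinikubeOptions(serviceCIDR string) string {
	return fmt.Sprintf("CRIO_MINIKUBE_OPTIONS='--insecure-registry %s '\n", serviceCIDR)
}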
	I1122 00:34:34.049418  293341 machine.go:97] duration metric: took 3.920680133s to provisionDockerMachine
	I1122 00:34:34.049432  293341 client.go:176] duration metric: took 9.819735436s to LocalClient.Create
	I1122 00:34:34.049458  293341 start.go:167] duration metric: took 9.81980132s to libmachine.API.Create "calico-239758"
	I1122 00:34:34.049469  293341 start.go:293] postStartSetup for "calico-239758" (driver="docker")
	I1122 00:34:34.049487  293341 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1122 00:34:34.049572  293341 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1122 00:34:34.049631  293341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-239758
	I1122 00:34:34.068260  293341 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/calico-239758/id_rsa Username:docker}
	I1122 00:34:34.159774  293341 ssh_runner.go:195] Run: cat /etc/os-release
	I1122 00:34:34.163171  293341 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1122 00:34:34.163195  293341 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1122 00:34:34.163204  293341 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-9122/.minikube/addons for local assets ...
	I1122 00:34:34.163255  293341 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-9122/.minikube/files for local assets ...
	I1122 00:34:34.163340  293341 filesync.go:149] local asset: /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem -> 145852.pem in /etc/ssl/certs
	I1122 00:34:34.163431  293341 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1122 00:34:34.170727  293341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem --> /etc/ssl/certs/145852.pem (1708 bytes)
	I1122 00:34:34.192001  293341 start.go:296] duration metric: took 142.50085ms for postStartSetup
	I1122 00:34:34.192444  293341 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-239758
	I1122 00:34:34.212037  293341 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/calico-239758/config.json ...
	I1122 00:34:34.212352  293341 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:34:34.212415  293341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-239758
	I1122 00:34:34.232097  293341 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/calico-239758/id_rsa Username:docker}
	I1122 00:34:34.318858  293341 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 00:34:34.323290  293341 start.go:128] duration metric: took 10.095316664s to createHost
	I1122 00:34:34.323314  293341 start.go:83] releasing machines lock for "calico-239758", held for 10.095497901s
	I1122 00:34:34.323385  293341 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-239758
	I1122 00:34:34.341009  293341 ssh_runner.go:195] Run: cat /version.json
	I1122 00:34:34.341082  293341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-239758
	I1122 00:34:34.341129  293341 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1122 00:34:34.341224  293341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-239758
	I1122 00:34:34.359899  293341 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/calico-239758/id_rsa Username:docker}
	I1122 00:34:34.360216  293341 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/calico-239758/id_rsa Username:docker}
	I1122 00:34:34.498553  293341 ssh_runner.go:195] Run: systemctl --version
	I1122 00:34:34.504767  293341 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1122 00:34:34.538134  293341 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1122 00:34:34.542736  293341 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1122 00:34:34.542796  293341 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1122 00:34:34.568405  293341 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1122 00:34:34.568428  293341 start.go:496] detecting cgroup driver to use...
	I1122 00:34:34.568459  293341 detect.go:190] detected "systemd" cgroup driver on host os
	I1122 00:34:34.568511  293341 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1122 00:34:34.583711  293341 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1122 00:34:34.595905  293341 docker.go:218] disabling cri-docker service (if available) ...
	I1122 00:34:34.595949  293341 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1122 00:34:34.611386  293341 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1122 00:34:34.628586  293341 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1122 00:34:34.708629  293341 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1122 00:34:34.791696  293341 docker.go:234] disabling docker service ...
	I1122 00:34:34.791754  293341 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1122 00:34:34.809790  293341 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1122 00:34:34.822252  293341 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1122 00:34:34.903170  293341 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1122 00:34:34.984772  293341 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1122 00:34:34.996404  293341 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1122 00:34:35.010309  293341 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1122 00:34:35.010356  293341 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:34:35.020022  293341 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1122 00:34:35.020090  293341 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:34:35.028457  293341 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:34:35.036764  293341 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:34:35.044839  293341 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1122 00:34:35.052729  293341 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:34:35.060762  293341 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:34:35.073830  293341 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:34:35.081844  293341 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1122 00:34:35.088699  293341 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1122 00:34:35.095439  293341 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:34:35.174703  293341 ssh_runner.go:195] Run: sudo systemctl restart crio
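The run of sed commands between 00:34:35.010 and 00:34:35.045 rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to systemd, and move conmon into the pod cgroup, before the daemon-reload and restart above. Collected into one helper (the sed programs are copied from the log; the runner signature is an assumption):

package sketch

import "fmt"

// applyCrioConfig replays the in-place config edits from the log, then
// restarts CRI-O so they take effect. run executes a shell command on the node.
func applyCrioConfig(run func(cmd string) error) error {
	edits := []string{
		`s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|`,
		`s|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|`,
		`/conmon_cgroup = .*/d`,
		`/cgroup_manager = .*/a conmon_cgroup = "pod"`,
	}
	for _, prog := range edits {
		if err := run(fmt.Sprintf(`sudo sed -i '%s' /etc/crio/crio.conf.d/02-crio.conf`, prog)); err != nil {
			return err
		}
	}
	if err := run("sudo systemctl daemon-reload"); err != nil {
		return err
	}
	return run("sudo systemctl restart crio")
}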
	I1122 00:34:35.322843  293341 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1122 00:34:35.322913  293341 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1122 00:34:35.327415  293341 start.go:564] Will wait 60s for crictl version
	I1122 00:34:35.327483  293341 ssh_runner.go:195] Run: which crictl
	I1122 00:34:35.331606  293341 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1122 00:34:35.358695  293341 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1122 00:34:35.358768  293341 ssh_runner.go:195] Run: crio --version
	I1122 00:34:35.385499  293341 ssh_runner.go:195] Run: crio --version
	I1122 00:34:35.414828  293341 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1122 00:34:35.415835  293341 cli_runner.go:164] Run: docker network inspect calico-239758 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:34:35.433625  293341 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1122 00:34:35.437798  293341 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
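The /etc/hosts update at 00:34:35.437798 uses a filter-and-append pattern instead of sed: strip any stale host.minikube.internal line, append the fresh mapping, and copy the temp file back over /etc/hosts. The one-liner, parameterized (hypothetical helper):

package sketch

import "fmt"

// upsertHostsLine builds the bash one-liner from the log: grep -v drops any
// existing tab-separated entry for name, echo appends the new ip/name pair,
// and sudo cp installs the result over /etc/hosts.
func upsertHostsLine(ip, name string) string {
	return fmt.Sprintf(
		"{ grep -v $'\\t%[2]s$' \"/etc/hosts\"; echo \"%[1]s\t%[2]s\"; } > /tmp/h.$$; sudo cp /tmp/h.$$ \"/etc/hosts\"",
		ip, name)
}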
	I1122 00:34:35.447876  293341 kubeadm.go:884] updating cluster {Name:calico-239758 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-239758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1122 00:34:35.447990  293341 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:34:35.448035  293341 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:34:35.477983  293341 crio.go:514] all images are preloaded for cri-o runtime.
	I1122 00:34:35.478001  293341 crio.go:433] Images already preloaded, skipping extraction
	I1122 00:34:35.478039  293341 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:34:35.503124  293341 crio.go:514] all images are preloaded for cri-o runtime.
	I1122 00:34:35.503152  293341 cache_images.go:86] Images are preloaded, skipping loading
	I1122 00:34:35.503161  293341 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1122 00:34:35.503280  293341 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=calico-239758 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:calico-239758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
	I1122 00:34:35.503364  293341 ssh_runner.go:195] Run: crio config
	I1122 00:34:35.549133  293341 cni.go:84] Creating CNI manager for "calico"
	I1122 00:34:35.549162  293341 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1122 00:34:35.549182  293341 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-239758 NodeName:calico-239758 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1122 00:34:35.549320  293341 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "calico-239758"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1122 00:34:35.549381  293341 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1122 00:34:35.557192  293341 binaries.go:51] Found k8s binaries, skipping transfer
	I1122 00:34:35.557259  293341 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1122 00:34:35.564633  293341 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1122 00:34:35.577920  293341 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1122 00:34:35.594673  293341 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1122 00:34:35.606786  293341 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1122 00:34:35.610092  293341 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:34:35.619306  293341 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:34:35.726930  293341 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:34:35.754442  293341 certs.go:69] Setting up /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/calico-239758 for IP: 192.168.94.2
	I1122 00:34:35.754461  293341 certs.go:195] generating shared ca certs ...
	I1122 00:34:35.754477  293341 certs.go:227] acquiring lock for ca certs: {Name:mk04e97b22fe69e444baefaaf0d53a0afe979470 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:34:35.754616  293341 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21934-9122/.minikube/ca.key
	I1122 00:34:35.754681  293341 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21934-9122/.minikube/proxy-client-ca.key
	I1122 00:34:35.754701  293341 certs.go:257] generating profile certs ...
	I1122 00:34:35.754757  293341 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/calico-239758/client.key
	I1122 00:34:35.754770  293341 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/calico-239758/client.crt with IP's: []
	I1122 00:34:35.886588  293341 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/calico-239758/client.crt ...
	I1122 00:34:35.886617  293341 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/calico-239758/client.crt: {Name:mk80e4b50b13640dbfceb4aa8fb1a864e3e757e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:34:35.886834  293341 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/calico-239758/client.key ...
	I1122 00:34:35.886858  293341 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/calico-239758/client.key: {Name:mkdf0ff1ca86a4b6ec7b3c7adc9b549b600dc7d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:34:35.886990  293341 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/calico-239758/apiserver.key.c33b3359
	I1122 00:34:35.887021  293341 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/calico-239758/apiserver.crt.c33b3359 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1122 00:34:35.993993  293341 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/calico-239758/apiserver.crt.c33b3359 ...
	I1122 00:34:35.994013  293341 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/calico-239758/apiserver.crt.c33b3359: {Name:mk0bde71006e042423914c1492c118b912220f5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:34:35.994149  293341 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/calico-239758/apiserver.key.c33b3359 ...
	I1122 00:34:35.994169  293341 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/calico-239758/apiserver.key.c33b3359: {Name:mk05575741a3c7f4a59cea7e3dc3511ae6d16893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:34:35.994241  293341 certs.go:382] copying /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/calico-239758/apiserver.crt.c33b3359 -> /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/calico-239758/apiserver.crt
	I1122 00:34:35.994338  293341 certs.go:386] copying /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/calico-239758/apiserver.key.c33b3359 -> /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/calico-239758/apiserver.key
	I1122 00:34:35.994400  293341 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/calico-239758/proxy-client.key
	I1122 00:34:35.994415  293341 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/calico-239758/proxy-client.crt with IP's: []
	I1122 00:34:36.082678  293341 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/calico-239758/proxy-client.crt ...
	I1122 00:34:36.082700  293341 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/calico-239758/proxy-client.crt: {Name:mk86a3c8d72154b79102df46e6429c52c7f40731 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:34:36.082821  293341 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/calico-239758/proxy-client.key ...
	I1122 00:34:36.082833  293341 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/calico-239758/proxy-client.key: {Name:mka473f85754bb562c81cf79cb8010217c954ef1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:34:36.083005  293341 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/14585.pem (1338 bytes)
	W1122 00:34:36.083075  293341 certs.go:480] ignoring /home/jenkins/minikube-integration/21934-9122/.minikube/certs/14585_empty.pem, impossibly tiny 0 bytes
	I1122 00:34:36.083089  293341 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca-key.pem (1679 bytes)
	I1122 00:34:36.083124  293341 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem (1078 bytes)
	I1122 00:34:36.083148  293341 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem (1123 bytes)
	I1122 00:34:36.083171  293341 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/key.pem (1675 bytes)
	I1122 00:34:36.083224  293341 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem (1708 bytes)
	I1122 00:34:36.083788  293341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1122 00:34:36.101590  293341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1122 00:34:36.118269  293341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1122 00:34:36.134848  293341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1122 00:34:36.153038  293341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/calico-239758/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1122 00:34:36.169210  293341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/calico-239758/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1122 00:34:36.185465  293341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/calico-239758/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1122 00:34:36.201634  293341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/calico-239758/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1122 00:34:36.218111  293341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem --> /usr/share/ca-certificates/145852.pem (1708 bytes)
	I1122 00:34:36.235963  293341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1122 00:34:36.252698  293341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/certs/14585.pem --> /usr/share/ca-certificates/14585.pem (1338 bytes)
	I1122 00:34:36.270282  293341 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1122 00:34:36.282104  293341 ssh_runner.go:195] Run: openssl version
	I1122 00:34:36.287743  293341 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14585.pem && ln -fs /usr/share/ca-certificates/14585.pem /etc/ssl/certs/14585.pem"
	I1122 00:34:36.295537  293341 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14585.pem
	I1122 00:34:36.298840  293341 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 23:52 /usr/share/ca-certificates/14585.pem
	I1122 00:34:36.298889  293341 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14585.pem
	I1122 00:34:36.332698  293341 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14585.pem /etc/ssl/certs/51391683.0"
	I1122 00:34:36.340811  293341 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145852.pem && ln -fs /usr/share/ca-certificates/145852.pem /etc/ssl/certs/145852.pem"
	I1122 00:34:36.348629  293341 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145852.pem
	I1122 00:34:36.351912  293341 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 23:52 /usr/share/ca-certificates/145852.pem
	I1122 00:34:36.351953  293341 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145852.pem
	I1122 00:34:36.387926  293341 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145852.pem /etc/ssl/certs/3ec20f2e.0"
	I1122 00:34:36.395767  293341 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1122 00:34:36.403503  293341 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:34:36.407040  293341 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 23:46 /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:34:36.407093  293341 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:34:36.441785  293341 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
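	Note: the openssl/ln pairs above implement the OpenSSL hashed-directory convention: each CA certificate is linked under /etc/ssl/certs as <subject-hash>.0 so libssl can locate it by hash. A minimal sketch of the same step, assuming a hypothetical cert at /usr/share/ca-certificates/example.pem:

	    # compute the subject hash, then create the <hash>.0 symlink if missing
	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/example.pem)
	    sudo test -L "/etc/ssl/certs/${hash}.0" || sudo ln -fs /usr/share/ca-certificates/example.pem "/etc/ssl/certs/${hash}.0"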
	I1122 00:34:36.449576  293341 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1122 00:34:36.452741  293341 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1122 00:34:36.452803  293341 kubeadm.go:401] StartCluster: {Name:calico-239758 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-239758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:34:36.452900  293341 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1122 00:34:36.452967  293341 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1122 00:34:36.478379  293341 cri.go:89] found id: ""
	I1122 00:34:36.478436  293341 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1122 00:34:36.485826  293341 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1122 00:34:36.493286  293341 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1122 00:34:36.493337  293341 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1122 00:34:36.500466  293341 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1122 00:34:36.500482  293341 kubeadm.go:158] found existing configuration files:
	
	I1122 00:34:36.500511  293341 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1122 00:34:36.507594  293341 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1122 00:34:36.507631  293341 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1122 00:34:36.514397  293341 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1122 00:34:36.521886  293341 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1122 00:34:36.521927  293341 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1122 00:34:36.528989  293341 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1122 00:34:36.536061  293341 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1122 00:34:36.536108  293341 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1122 00:34:36.542926  293341 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1122 00:34:36.550313  293341 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1122 00:34:36.550359  293341 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
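	Note: the four grep/rm pairs above are minikube's stale-config cleanup: any kubeconfig under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 is removed before kubeadm init regenerates it. The equivalent shell pattern, sketched for a single file:

	    # remove the file unless it already points at the expected control-plane endpoint
	    f=/etc/kubernetes/admin.conf
	    sudo grep -q "https://control-plane.minikube.internal:8443" "$f" || sudo rm -f "$f"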
	I1122 00:34:36.557152  293341 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1122 00:34:36.595862  293341 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1122 00:34:36.595928  293341 kubeadm.go:319] [preflight] Running pre-flight checks
	I1122 00:34:36.635422  293341 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1122 00:34:36.635525  293341 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1122 00:34:36.635575  293341 kubeadm.go:319] OS: Linux
	I1122 00:34:36.635636  293341 kubeadm.go:319] CGROUPS_CPU: enabled
	I1122 00:34:36.635696  293341 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1122 00:34:36.635755  293341 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1122 00:34:36.635816  293341 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1122 00:34:36.635876  293341 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1122 00:34:36.635939  293341 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1122 00:34:36.635999  293341 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1122 00:34:36.636085  293341 kubeadm.go:319] CGROUPS_IO: enabled
	I1122 00:34:36.700817  293341 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1122 00:34:36.700979  293341 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1122 00:34:36.701167  293341 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1122 00:34:36.707734  293341 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
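	As the preflight output itself suggests, the control-plane image pulls can be done ahead of time; for this cluster version that would be roughly:

	    kubeadm config images pull --kubernetes-version v1.34.1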
	W1122 00:34:35.067941  284750 node_ready.go:57] node "auto-239758" has "Ready":"False" status (will retry)
	I1122 00:34:35.566670  284750 node_ready.go:49] node "auto-239758" is "Ready"
	I1122 00:34:35.566700  284750 node_ready.go:38] duration metric: took 11.002603065s for node "auto-239758" to be "Ready" ...
	I1122 00:34:35.566717  284750 api_server.go:52] waiting for apiserver process to appear ...
	I1122 00:34:35.566765  284750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:34:35.579021  284750 api_server.go:72] duration metric: took 11.344274813s to wait for apiserver process to appear ...
	I1122 00:34:35.579047  284750 api_server.go:88] waiting for apiserver healthz status ...
	I1122 00:34:35.579081  284750 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1122 00:34:35.583653  284750 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1122 00:34:35.584652  284750 api_server.go:141] control plane version: v1.34.1
	I1122 00:34:35.584682  284750 api_server.go:131] duration metric: took 5.617194ms to wait for apiserver health ...
	I1122 00:34:35.584693  284750 system_pods.go:43] waiting for kube-system pods to appear ...
	I1122 00:34:35.587721  284750 system_pods.go:59] 8 kube-system pods found
	I1122 00:34:35.587755  284750 system_pods.go:61] "coredns-66bc5c9577-hlldw" [b352ddb8-d26f-48ef-852e-60f82fa7a043] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:34:35.587764  284750 system_pods.go:61] "etcd-auto-239758" [b8674b47-653a-427b-b20a-03f58323c835] Running
	I1122 00:34:35.587772  284750 system_pods.go:61] "kindnet-5hwz7" [e454d6a7-3b55-42ff-9c37-87fe9033f46e] Running
	I1122 00:34:35.587778  284750 system_pods.go:61] "kube-apiserver-auto-239758" [f179e2f0-d972-45d4-b549-8a4e482a6ee2] Running
	I1122 00:34:35.587784  284750 system_pods.go:61] "kube-controller-manager-auto-239758" [d31dde27-d5ef-4741-a551-7ef1e1f22d22] Running
	I1122 00:34:35.587794  284750 system_pods.go:61] "kube-proxy-ttj9r" [38725e68-13ff-4bed-b490-1abb73d377e4] Running
	I1122 00:34:35.587801  284750 system_pods.go:61] "kube-scheduler-auto-239758" [832f6857-40a5-43bb-9233-f7d06766bfc5] Running
	I1122 00:34:35.587818  284750 system_pods.go:61] "storage-provisioner" [322c5b60-c9ad-484b-a580-e9f94ac6bd8b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:34:35.587830  284750 system_pods.go:74] duration metric: took 3.128863ms to wait for pod list to return data ...
	I1122 00:34:35.587843  284750 default_sa.go:34] waiting for default service account to be created ...
	I1122 00:34:35.589977  284750 default_sa.go:45] found service account: "default"
	I1122 00:34:35.589998  284750 default_sa.go:55] duration metric: took 2.145397ms for default service account to be created ...
	I1122 00:34:35.590008  284750 system_pods.go:116] waiting for k8s-apps to be running ...
	I1122 00:34:35.592483  284750 system_pods.go:86] 8 kube-system pods found
	I1122 00:34:35.592516  284750 system_pods.go:89] "coredns-66bc5c9577-hlldw" [b352ddb8-d26f-48ef-852e-60f82fa7a043] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:34:35.592525  284750 system_pods.go:89] "etcd-auto-239758" [b8674b47-653a-427b-b20a-03f58323c835] Running
	I1122 00:34:35.592533  284750 system_pods.go:89] "kindnet-5hwz7" [e454d6a7-3b55-42ff-9c37-87fe9033f46e] Running
	I1122 00:34:35.592540  284750 system_pods.go:89] "kube-apiserver-auto-239758" [f179e2f0-d972-45d4-b549-8a4e482a6ee2] Running
	I1122 00:34:35.592547  284750 system_pods.go:89] "kube-controller-manager-auto-239758" [d31dde27-d5ef-4741-a551-7ef1e1f22d22] Running
	I1122 00:34:35.592556  284750 system_pods.go:89] "kube-proxy-ttj9r" [38725e68-13ff-4bed-b490-1abb73d377e4] Running
	I1122 00:34:35.592562  284750 system_pods.go:89] "kube-scheduler-auto-239758" [832f6857-40a5-43bb-9233-f7d06766bfc5] Running
	I1122 00:34:35.592573  284750 system_pods.go:89] "storage-provisioner" [322c5b60-c9ad-484b-a580-e9f94ac6bd8b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:34:35.592605  284750 retry.go:31] will retry after 212.155507ms: missing components: kube-dns
	I1122 00:34:35.808598  284750 system_pods.go:86] 8 kube-system pods found
	I1122 00:34:35.808633  284750 system_pods.go:89] "coredns-66bc5c9577-hlldw" [b352ddb8-d26f-48ef-852e-60f82fa7a043] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:34:35.808642  284750 system_pods.go:89] "etcd-auto-239758" [b8674b47-653a-427b-b20a-03f58323c835] Running
	I1122 00:34:35.808650  284750 system_pods.go:89] "kindnet-5hwz7" [e454d6a7-3b55-42ff-9c37-87fe9033f46e] Running
	I1122 00:34:35.808655  284750 system_pods.go:89] "kube-apiserver-auto-239758" [f179e2f0-d972-45d4-b549-8a4e482a6ee2] Running
	I1122 00:34:35.808660  284750 system_pods.go:89] "kube-controller-manager-auto-239758" [d31dde27-d5ef-4741-a551-7ef1e1f22d22] Running
	I1122 00:34:35.808666  284750 system_pods.go:89] "kube-proxy-ttj9r" [38725e68-13ff-4bed-b490-1abb73d377e4] Running
	I1122 00:34:35.808672  284750 system_pods.go:89] "kube-scheduler-auto-239758" [832f6857-40a5-43bb-9233-f7d06766bfc5] Running
	I1122 00:34:35.808687  284750 system_pods.go:89] "storage-provisioner" [322c5b60-c9ad-484b-a580-e9f94ac6bd8b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:34:35.808722  284750 retry.go:31] will retry after 316.247782ms: missing components: kube-dns
	I1122 00:34:36.128295  284750 system_pods.go:86] 8 kube-system pods found
	I1122 00:34:36.128322  284750 system_pods.go:89] "coredns-66bc5c9577-hlldw" [b352ddb8-d26f-48ef-852e-60f82fa7a043] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:34:36.128328  284750 system_pods.go:89] "etcd-auto-239758" [b8674b47-653a-427b-b20a-03f58323c835] Running
	I1122 00:34:36.128334  284750 system_pods.go:89] "kindnet-5hwz7" [e454d6a7-3b55-42ff-9c37-87fe9033f46e] Running
	I1122 00:34:36.128337  284750 system_pods.go:89] "kube-apiserver-auto-239758" [f179e2f0-d972-45d4-b549-8a4e482a6ee2] Running
	I1122 00:34:36.128342  284750 system_pods.go:89] "kube-controller-manager-auto-239758" [d31dde27-d5ef-4741-a551-7ef1e1f22d22] Running
	I1122 00:34:36.128352  284750 system_pods.go:89] "kube-proxy-ttj9r" [38725e68-13ff-4bed-b490-1abb73d377e4] Running
	I1122 00:34:36.128355  284750 system_pods.go:89] "kube-scheduler-auto-239758" [832f6857-40a5-43bb-9233-f7d06766bfc5] Running
	I1122 00:34:36.128360  284750 system_pods.go:89] "storage-provisioner" [322c5b60-c9ad-484b-a580-e9f94ac6bd8b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:34:36.128381  284750 retry.go:31] will retry after 480.759917ms: missing components: kube-dns
	I1122 00:34:36.613336  284750 system_pods.go:86] 8 kube-system pods found
	I1122 00:34:36.613384  284750 system_pods.go:89] "coredns-66bc5c9577-hlldw" [b352ddb8-d26f-48ef-852e-60f82fa7a043] Running
	I1122 00:34:36.613394  284750 system_pods.go:89] "etcd-auto-239758" [b8674b47-653a-427b-b20a-03f58323c835] Running
	I1122 00:34:36.613399  284750 system_pods.go:89] "kindnet-5hwz7" [e454d6a7-3b55-42ff-9c37-87fe9033f46e] Running
	I1122 00:34:36.613404  284750 system_pods.go:89] "kube-apiserver-auto-239758" [f179e2f0-d972-45d4-b549-8a4e482a6ee2] Running
	I1122 00:34:36.613409  284750 system_pods.go:89] "kube-controller-manager-auto-239758" [d31dde27-d5ef-4741-a551-7ef1e1f22d22] Running
	I1122 00:34:36.613414  284750 system_pods.go:89] "kube-proxy-ttj9r" [38725e68-13ff-4bed-b490-1abb73d377e4] Running
	I1122 00:34:36.613429  284750 system_pods.go:89] "kube-scheduler-auto-239758" [832f6857-40a5-43bb-9233-f7d06766bfc5] Running
	I1122 00:34:36.613433  284750 system_pods.go:89] "storage-provisioner" [322c5b60-c9ad-484b-a580-e9f94ac6bd8b] Running
	I1122 00:34:36.613445  284750 system_pods.go:126] duration metric: took 1.023428657s to wait for k8s-apps to be running ...
	I1122 00:34:36.613459  284750 system_svc.go:44] waiting for kubelet service to be running ....
	I1122 00:34:36.613511  284750 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:34:36.629356  284750 system_svc.go:56] duration metric: took 15.886879ms WaitForService to wait for kubelet
	I1122 00:34:36.629385  284750 kubeadm.go:587] duration metric: took 12.394643418s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:34:36.629417  284750 node_conditions.go:102] verifying NodePressure condition ...
	I1122 00:34:36.631913  284750 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1122 00:34:36.631945  284750 node_conditions.go:123] node cpu capacity is 8
	I1122 00:34:36.631966  284750 node_conditions.go:105] duration metric: took 2.54234ms to run NodePressure ...
	I1122 00:34:36.631982  284750 start.go:242] waiting for startup goroutines ...
	I1122 00:34:36.631996  284750 start.go:247] waiting for cluster config update ...
	I1122 00:34:36.632045  284750 start.go:256] writing updated cluster config ...
	I1122 00:34:36.632401  284750 ssh_runner.go:195] Run: rm -f paused
	I1122 00:34:36.637690  284750 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:34:36.644921  284750 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-hlldw" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:36.648633  284750 pod_ready.go:94] pod "coredns-66bc5c9577-hlldw" is "Ready"
	I1122 00:34:36.648653  284750 pod_ready.go:86] duration metric: took 3.712061ms for pod "coredns-66bc5c9577-hlldw" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:36.650732  284750 pod_ready.go:83] waiting for pod "etcd-auto-239758" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:36.654884  284750 pod_ready.go:94] pod "etcd-auto-239758" is "Ready"
	I1122 00:34:36.654906  284750 pod_ready.go:86] duration metric: took 4.153267ms for pod "etcd-auto-239758" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:36.656957  284750 pod_ready.go:83] waiting for pod "kube-apiserver-auto-239758" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:36.660945  284750 pod_ready.go:94] pod "kube-apiserver-auto-239758" is "Ready"
	I1122 00:34:36.660966  284750 pod_ready.go:86] duration metric: took 3.989133ms for pod "kube-apiserver-auto-239758" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:36.662739  284750 pod_ready.go:83] waiting for pod "kube-controller-manager-auto-239758" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:37.042137  284750 pod_ready.go:94] pod "kube-controller-manager-auto-239758" is "Ready"
	I1122 00:34:37.042169  284750 pod_ready.go:86] duration metric: took 379.411313ms for pod "kube-controller-manager-auto-239758" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:37.241844  284750 pod_ready.go:83] waiting for pod "kube-proxy-ttj9r" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:37.641584  284750 pod_ready.go:94] pod "kube-proxy-ttj9r" is "Ready"
	I1122 00:34:37.641609  284750 pod_ready.go:86] duration metric: took 399.739742ms for pod "kube-proxy-ttj9r" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:37.841579  284750 pod_ready.go:83] waiting for pod "kube-scheduler-auto-239758" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:38.241976  284750 pod_ready.go:94] pod "kube-scheduler-auto-239758" is "Ready"
	I1122 00:34:38.242003  284750 pod_ready.go:86] duration metric: took 400.400799ms for pod "kube-scheduler-auto-239758" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:38.242016  284750 pod_ready.go:40] duration metric: took 1.604299507s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:34:38.290285  284750 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1122 00:34:38.292160  284750 out.go:179] * Done! kubectl is now configured to use "auto-239758" cluster and "default" namespace by default
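	At this point the auto-239758 context is configured on the host; a quick sanity check would be, for example:

	    kubectl --context auto-239758 get pods -n kube-system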
	I1122 00:34:36.710354  293341 out.go:252]   - Generating certificates and keys ...
	I1122 00:34:36.710444  293341 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1122 00:34:36.710541  293341 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1122 00:34:37.440660  293341 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1122 00:34:37.653686  293341 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1122 00:34:37.775780  293341 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1122 00:34:37.989687  293341 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1122 00:34:38.225366  293341 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1122 00:34:38.225528  293341 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [calico-239758 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1122 00:34:38.380447  293341 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1122 00:34:38.380700  293341 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [calico-239758 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1122 00:34:38.602224  293341 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1122 00:34:38.695190  293341 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	W1122 00:34:35.923399  286707 node_ready.go:57] node "kindnet-239758" has "Ready":"False" status (will retry)
	W1122 00:34:37.924356  286707 node_ready.go:57] node "kindnet-239758" has "Ready":"False" status (will retry)
	W1122 00:34:39.924626  286707 node_ready.go:57] node "kindnet-239758" has "Ready":"False" status (will retry)
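	The repeated node_ready warnings above are minikube polling the node's Ready condition; the same wait can be expressed directly with kubectl, e.g.:

	    kubectl --context kindnet-239758 wait --for=condition=Ready node/kindnet-239758 --timeout=4m0s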
	I1122 00:34:39.366011  293341 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1122 00:34:39.366204  293341 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1122 00:34:39.425435  293341 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1122 00:34:40.102322  293341 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1122 00:34:40.443471  293341 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1122 00:34:41.377319  293341 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1122 00:34:41.601945  293341 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1122 00:34:41.603321  293341 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1122 00:34:41.608221  293341 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1122 00:34:40.424419  286707 node_ready.go:49] node "kindnet-239758" is "Ready"
	I1122 00:34:40.424450  286707 node_ready.go:38] duration metric: took 11.003562111s for node "kindnet-239758" to be "Ready" ...
	I1122 00:34:40.424469  286707 api_server.go:52] waiting for apiserver process to appear ...
	I1122 00:34:40.424541  286707 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:34:40.440607  286707 api_server.go:72] duration metric: took 12.218636595s to wait for apiserver process to appear ...
	I1122 00:34:40.440638  286707 api_server.go:88] waiting for apiserver healthz status ...
	I1122 00:34:40.440683  286707 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1122 00:34:40.446081  286707 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1122 00:34:40.447220  286707 api_server.go:141] control plane version: v1.34.1
	I1122 00:34:40.447250  286707 api_server.go:131] duration metric: took 6.583507ms to wait for apiserver health ...
	I1122 00:34:40.447262  286707 system_pods.go:43] waiting for kube-system pods to appear ...
	I1122 00:34:40.450923  286707 system_pods.go:59] 8 kube-system pods found
	I1122 00:34:40.450957  286707 system_pods.go:61] "coredns-66bc5c9577-5n5ck" [3c111b03-1f8c-4a15-b3c2-192bd9f7b2bf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:34:40.450964  286707 system_pods.go:61] "etcd-kindnet-239758" [4c6d84a5-79d9-424f-bee9-83064aad8af6] Running
	I1122 00:34:40.450972  286707 system_pods.go:61] "kindnet-wmb78" [90ceb9ef-eb4e-4cd9-97b5-1c95ad408f96] Running
	I1122 00:34:40.450977  286707 system_pods.go:61] "kube-apiserver-kindnet-239758" [a7638db1-f1c6-4c6f-abb5-87b1e0cd04dd] Running
	I1122 00:34:40.450983  286707 system_pods.go:61] "kube-controller-manager-kindnet-239758" [b9366423-31e9-49bc-970a-3997e4033b32] Running
	I1122 00:34:40.450988  286707 system_pods.go:61] "kube-proxy-5k9bx" [b1cf1a76-308a-40d2-9e28-11aecc61b32b] Running
	I1122 00:34:40.450993  286707 system_pods.go:61] "kube-scheduler-kindnet-239758" [90616e41-452c-433e-96ae-05f4e02e0a50] Running
	I1122 00:34:40.451003  286707 system_pods.go:61] "storage-provisioner" [1b70bb95-d065-418b-b0dd-111442a1c25f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:34:40.451010  286707 system_pods.go:74] duration metric: took 3.742031ms to wait for pod list to return data ...
	I1122 00:34:40.451028  286707 default_sa.go:34] waiting for default service account to be created ...
	I1122 00:34:40.453677  286707 default_sa.go:45] found service account: "default"
	I1122 00:34:40.453699  286707 default_sa.go:55] duration metric: took 2.660289ms for default service account to be created ...
	I1122 00:34:40.453709  286707 system_pods.go:116] waiting for k8s-apps to be running ...
	I1122 00:34:40.457123  286707 system_pods.go:86] 8 kube-system pods found
	I1122 00:34:40.457152  286707 system_pods.go:89] "coredns-66bc5c9577-5n5ck" [3c111b03-1f8c-4a15-b3c2-192bd9f7b2bf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:34:40.457166  286707 system_pods.go:89] "etcd-kindnet-239758" [4c6d84a5-79d9-424f-bee9-83064aad8af6] Running
	I1122 00:34:40.457174  286707 system_pods.go:89] "kindnet-wmb78" [90ceb9ef-eb4e-4cd9-97b5-1c95ad408f96] Running
	I1122 00:34:40.457179  286707 system_pods.go:89] "kube-apiserver-kindnet-239758" [a7638db1-f1c6-4c6f-abb5-87b1e0cd04dd] Running
	I1122 00:34:40.457193  286707 system_pods.go:89] "kube-controller-manager-kindnet-239758" [b9366423-31e9-49bc-970a-3997e4033b32] Running
	I1122 00:34:40.457198  286707 system_pods.go:89] "kube-proxy-5k9bx" [b1cf1a76-308a-40d2-9e28-11aecc61b32b] Running
	I1122 00:34:40.457203  286707 system_pods.go:89] "kube-scheduler-kindnet-239758" [90616e41-452c-433e-96ae-05f4e02e0a50] Running
	I1122 00:34:40.457210  286707 system_pods.go:89] "storage-provisioner" [1b70bb95-d065-418b-b0dd-111442a1c25f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:34:40.457237  286707 retry.go:31] will retry after 213.581768ms: missing components: kube-dns
	I1122 00:34:40.675160  286707 system_pods.go:86] 8 kube-system pods found
	I1122 00:34:40.675191  286707 system_pods.go:89] "coredns-66bc5c9577-5n5ck" [3c111b03-1f8c-4a15-b3c2-192bd9f7b2bf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:34:40.675203  286707 system_pods.go:89] "etcd-kindnet-239758" [4c6d84a5-79d9-424f-bee9-83064aad8af6] Running
	I1122 00:34:40.675210  286707 system_pods.go:89] "kindnet-wmb78" [90ceb9ef-eb4e-4cd9-97b5-1c95ad408f96] Running
	I1122 00:34:40.675213  286707 system_pods.go:89] "kube-apiserver-kindnet-239758" [a7638db1-f1c6-4c6f-abb5-87b1e0cd04dd] Running
	I1122 00:34:40.675216  286707 system_pods.go:89] "kube-controller-manager-kindnet-239758" [b9366423-31e9-49bc-970a-3997e4033b32] Running
	I1122 00:34:40.675219  286707 system_pods.go:89] "kube-proxy-5k9bx" [b1cf1a76-308a-40d2-9e28-11aecc61b32b] Running
	I1122 00:34:40.675222  286707 system_pods.go:89] "kube-scheduler-kindnet-239758" [90616e41-452c-433e-96ae-05f4e02e0a50] Running
	I1122 00:34:40.675227  286707 system_pods.go:89] "storage-provisioner" [1b70bb95-d065-418b-b0dd-111442a1c25f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:34:40.675241  286707 retry.go:31] will retry after 234.94544ms: missing components: kube-dns
	I1122 00:34:40.914103  286707 system_pods.go:86] 8 kube-system pods found
	I1122 00:34:40.914146  286707 system_pods.go:89] "coredns-66bc5c9577-5n5ck" [3c111b03-1f8c-4a15-b3c2-192bd9f7b2bf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:34:40.914154  286707 system_pods.go:89] "etcd-kindnet-239758" [4c6d84a5-79d9-424f-bee9-83064aad8af6] Running
	I1122 00:34:40.914162  286707 system_pods.go:89] "kindnet-wmb78" [90ceb9ef-eb4e-4cd9-97b5-1c95ad408f96] Running
	I1122 00:34:40.914168  286707 system_pods.go:89] "kube-apiserver-kindnet-239758" [a7638db1-f1c6-4c6f-abb5-87b1e0cd04dd] Running
	I1122 00:34:40.914173  286707 system_pods.go:89] "kube-controller-manager-kindnet-239758" [b9366423-31e9-49bc-970a-3997e4033b32] Running
	I1122 00:34:40.914181  286707 system_pods.go:89] "kube-proxy-5k9bx" [b1cf1a76-308a-40d2-9e28-11aecc61b32b] Running
	I1122 00:34:40.914186  286707 system_pods.go:89] "kube-scheduler-kindnet-239758" [90616e41-452c-433e-96ae-05f4e02e0a50] Running
	I1122 00:34:40.914197  286707 system_pods.go:89] "storage-provisioner" [1b70bb95-d065-418b-b0dd-111442a1c25f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:34:40.914215  286707 retry.go:31] will retry after 338.264832ms: missing components: kube-dns
	I1122 00:34:41.256177  286707 system_pods.go:86] 8 kube-system pods found
	I1122 00:34:41.256208  286707 system_pods.go:89] "coredns-66bc5c9577-5n5ck" [3c111b03-1f8c-4a15-b3c2-192bd9f7b2bf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:34:41.256214  286707 system_pods.go:89] "etcd-kindnet-239758" [4c6d84a5-79d9-424f-bee9-83064aad8af6] Running
	I1122 00:34:41.256224  286707 system_pods.go:89] "kindnet-wmb78" [90ceb9ef-eb4e-4cd9-97b5-1c95ad408f96] Running
	I1122 00:34:41.256231  286707 system_pods.go:89] "kube-apiserver-kindnet-239758" [a7638db1-f1c6-4c6f-abb5-87b1e0cd04dd] Running
	I1122 00:34:41.256235  286707 system_pods.go:89] "kube-controller-manager-kindnet-239758" [b9366423-31e9-49bc-970a-3997e4033b32] Running
	I1122 00:34:41.256239  286707 system_pods.go:89] "kube-proxy-5k9bx" [b1cf1a76-308a-40d2-9e28-11aecc61b32b] Running
	I1122 00:34:41.256250  286707 system_pods.go:89] "kube-scheduler-kindnet-239758" [90616e41-452c-433e-96ae-05f4e02e0a50] Running
	I1122 00:34:41.256258  286707 system_pods.go:89] "storage-provisioner" [1b70bb95-d065-418b-b0dd-111442a1c25f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:34:41.256281  286707 retry.go:31] will retry after 464.101326ms: missing components: kube-dns
	I1122 00:34:41.724785  286707 system_pods.go:86] 8 kube-system pods found
	I1122 00:34:41.724810  286707 system_pods.go:89] "coredns-66bc5c9577-5n5ck" [3c111b03-1f8c-4a15-b3c2-192bd9f7b2bf] Running
	I1122 00:34:41.724816  286707 system_pods.go:89] "etcd-kindnet-239758" [4c6d84a5-79d9-424f-bee9-83064aad8af6] Running
	I1122 00:34:41.724820  286707 system_pods.go:89] "kindnet-wmb78" [90ceb9ef-eb4e-4cd9-97b5-1c95ad408f96] Running
	I1122 00:34:41.724823  286707 system_pods.go:89] "kube-apiserver-kindnet-239758" [a7638db1-f1c6-4c6f-abb5-87b1e0cd04dd] Running
	I1122 00:34:41.724833  286707 system_pods.go:89] "kube-controller-manager-kindnet-239758" [b9366423-31e9-49bc-970a-3997e4033b32] Running
	I1122 00:34:41.724837  286707 system_pods.go:89] "kube-proxy-5k9bx" [b1cf1a76-308a-40d2-9e28-11aecc61b32b] Running
	I1122 00:34:41.724843  286707 system_pods.go:89] "kube-scheduler-kindnet-239758" [90616e41-452c-433e-96ae-05f4e02e0a50] Running
	I1122 00:34:41.724846  286707 system_pods.go:89] "storage-provisioner" [1b70bb95-d065-418b-b0dd-111442a1c25f] Running
	I1122 00:34:41.724855  286707 system_pods.go:126] duration metric: took 1.27114076s to wait for k8s-apps to be running ...
	I1122 00:34:41.724866  286707 system_svc.go:44] waiting for kubelet service to be running ....
	I1122 00:34:41.724904  286707 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:34:41.739239  286707 system_svc.go:56] duration metric: took 14.363847ms WaitForService to wait for kubelet
	I1122 00:34:41.739269  286707 kubeadm.go:587] duration metric: took 13.517315904s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:34:41.739292  286707 node_conditions.go:102] verifying NodePressure condition ...
	I1122 00:34:41.742308  286707 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1122 00:34:41.742341  286707 node_conditions.go:123] node cpu capacity is 8
	I1122 00:34:41.742362  286707 node_conditions.go:105] duration metric: took 3.063934ms to run NodePressure ...
	I1122 00:34:41.742378  286707 start.go:242] waiting for startup goroutines ...
	I1122 00:34:41.742387  286707 start.go:247] waiting for cluster config update ...
	I1122 00:34:41.742402  286707 start.go:256] writing updated cluster config ...
	I1122 00:34:41.742717  286707 ssh_runner.go:195] Run: rm -f paused
	I1122 00:34:41.746785  286707 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:34:41.750779  286707 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5n5ck" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:41.755881  286707 pod_ready.go:94] pod "coredns-66bc5c9577-5n5ck" is "Ready"
	I1122 00:34:41.755905  286707 pod_ready.go:86] duration metric: took 5.102628ms for pod "coredns-66bc5c9577-5n5ck" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:41.758038  286707 pod_ready.go:83] waiting for pod "etcd-kindnet-239758" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:41.762310  286707 pod_ready.go:94] pod "etcd-kindnet-239758" is "Ready"
	I1122 00:34:41.762331  286707 pod_ready.go:86] duration metric: took 4.246089ms for pod "etcd-kindnet-239758" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:41.764042  286707 pod_ready.go:83] waiting for pod "kube-apiserver-kindnet-239758" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:41.768498  286707 pod_ready.go:94] pod "kube-apiserver-kindnet-239758" is "Ready"
	I1122 00:34:41.768516  286707 pod_ready.go:86] duration metric: took 4.438824ms for pod "kube-apiserver-kindnet-239758" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:41.771804  286707 pod_ready.go:83] waiting for pod "kube-controller-manager-kindnet-239758" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:42.150993  286707 pod_ready.go:94] pod "kube-controller-manager-kindnet-239758" is "Ready"
	I1122 00:34:42.151028  286707 pod_ready.go:86] duration metric: took 379.198583ms for pod "kube-controller-manager-kindnet-239758" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:42.352340  286707 pod_ready.go:83] waiting for pod "kube-proxy-5k9bx" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:42.751466  286707 pod_ready.go:94] pod "kube-proxy-5k9bx" is "Ready"
	I1122 00:34:42.751497  286707 pod_ready.go:86] duration metric: took 399.128098ms for pod "kube-proxy-5k9bx" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:42.951897  286707 pod_ready.go:83] waiting for pod "kube-scheduler-kindnet-239758" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:43.351409  286707 pod_ready.go:94] pod "kube-scheduler-kindnet-239758" is "Ready"
	I1122 00:34:43.351485  286707 pod_ready.go:86] duration metric: took 399.555519ms for pod "kube-scheduler-kindnet-239758" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:43.351520  286707 pod_ready.go:40] duration metric: took 1.604703325s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:34:43.410915  286707 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1122 00:34:43.412496  286707 out.go:179] * Done! kubectl is now configured to use "kindnet-239758" cluster and "default" namespace by default
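	With both profiles now up, a profile's overall health can be confirmed from the host with, for instance:

	    minikube -p kindnet-239758 status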
	I1122 00:34:41.609750  293341 out.go:252]   - Booting up control plane ...
	I1122 00:34:41.609968  293341 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1122 00:34:41.610096  293341 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1122 00:34:41.610670  293341 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1122 00:34:41.626130  293341 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1122 00:34:41.626257  293341 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1122 00:34:41.632958  293341 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1122 00:34:41.633179  293341 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1122 00:34:41.633223  293341 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1122 00:34:41.731919  293341 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1122 00:34:41.732105  293341 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1122 00:34:43.233251  293341 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501329873s
	I1122 00:34:43.235879  293341 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1122 00:34:43.235997  293341 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1122 00:34:43.236113  293341 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1122 00:34:43.236211  293341 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
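	The three control-plane-check probes above hit standard health endpoints and can be reproduced manually from inside the node, roughly (the -k flag is needed because the serving certs are cluster-signed):

	    curl -k https://192.168.94.2:8443/livez      # kube-apiserver
	    curl -k https://127.0.0.1:10257/healthz      # kube-controller-manager
	    curl -k https://127.0.0.1:10259/livez        # kube-scheduler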
	
	
	==> CRI-O <==
	Nov 22 00:34:10 default-k8s-diff-port-046175 crio[570]: time="2025-11-22T00:34:10.192476757Z" level=info msg="Started container" PID=1750 containerID=140851972c704d2b8933d765c5a0b469002ddd01eaccfb2bd77463c8bb21a8ca description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ljm4t/dashboard-metrics-scraper id=474e8dd0-1bf4-4608-a140-7e5efbd8a3b4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ab47096307824e8a49d25f1f5a8eb219fb566d254ada11368edfffe29e2ffe0a
	Nov 22 00:34:11 default-k8s-diff-port-046175 crio[570]: time="2025-11-22T00:34:11.146647175Z" level=info msg="Removing container: ed8d6832442230c35937e0b726233a7376c70287f3e344300e8cc0d4439a4e8b" id=31adfd02-58c2-4bf0-bea4-c79bc08c4fc6 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 22 00:34:11 default-k8s-diff-port-046175 crio[570]: time="2025-11-22T00:34:11.156601505Z" level=info msg="Removed container ed8d6832442230c35937e0b726233a7376c70287f3e344300e8cc0d4439a4e8b: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ljm4t/dashboard-metrics-scraper" id=31adfd02-58c2-4bf0-bea4-c79bc08c4fc6 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 22 00:34:26 default-k8s-diff-port-046175 crio[570]: time="2025-11-22T00:34:26.188715445Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=eaa3b52f-901a-41ef-83b5-8d55e4d97f58 name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:34:26 default-k8s-diff-port-046175 crio[570]: time="2025-11-22T00:34:26.189721885Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=62277b35-c742-450e-ab16-d19ee7449571 name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:34:26 default-k8s-diff-port-046175 crio[570]: time="2025-11-22T00:34:26.190837876Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=edd7088c-6ffd-4e60-9f40-9f97b51c82b8 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:34:26 default-k8s-diff-port-046175 crio[570]: time="2025-11-22T00:34:26.19096898Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:34:26 default-k8s-diff-port-046175 crio[570]: time="2025-11-22T00:34:26.195676026Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:34:26 default-k8s-diff-port-046175 crio[570]: time="2025-11-22T00:34:26.195866367Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/9181f580e500ae9bb731d0e316ba45a88c9ec0518242cd824456bbf78b6e8fce/merged/etc/passwd: no such file or directory"
	Nov 22 00:34:26 default-k8s-diff-port-046175 crio[570]: time="2025-11-22T00:34:26.195905406Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/9181f580e500ae9bb731d0e316ba45a88c9ec0518242cd824456bbf78b6e8fce/merged/etc/group: no such file or directory"
	Nov 22 00:34:26 default-k8s-diff-port-046175 crio[570]: time="2025-11-22T00:34:26.196214818Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:34:26 default-k8s-diff-port-046175 crio[570]: time="2025-11-22T00:34:26.236244835Z" level=info msg="Created container 08df4c4e3b8f76c73937401a17c2bdb997ebb970dfdcbe313d7d1e8b59c95ad5: kube-system/storage-provisioner/storage-provisioner" id=edd7088c-6ffd-4e60-9f40-9f97b51c82b8 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:34:26 default-k8s-diff-port-046175 crio[570]: time="2025-11-22T00:34:26.236855049Z" level=info msg="Starting container: 08df4c4e3b8f76c73937401a17c2bdb997ebb970dfdcbe313d7d1e8b59c95ad5" id=e7ac1253-908b-4bdd-a295-856e54a5033d name=/runtime.v1.RuntimeService/StartContainer
	Nov 22 00:34:26 default-k8s-diff-port-046175 crio[570]: time="2025-11-22T00:34:26.23887148Z" level=info msg="Started container" PID=1764 containerID=08df4c4e3b8f76c73937401a17c2bdb997ebb970dfdcbe313d7d1e8b59c95ad5 description=kube-system/storage-provisioner/storage-provisioner id=e7ac1253-908b-4bdd-a295-856e54a5033d name=/runtime.v1.RuntimeService/StartContainer sandboxID=60cd7259cfcbecd267e86ad47a79f4ac693579e1fe824abcf8f3dfce50edca9f
	Nov 22 00:34:33 default-k8s-diff-port-046175 crio[570]: time="2025-11-22T00:34:33.059976401Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=eed000c3-0321-4a7a-aa04-400a4bd61bc2 name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:34:33 default-k8s-diff-port-046175 crio[570]: time="2025-11-22T00:34:33.063044121Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=1ed6fb4c-1db6-4e66-8db6-5b84bf9b6d82 name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:34:33 default-k8s-diff-port-046175 crio[570]: time="2025-11-22T00:34:33.064043247Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ljm4t/dashboard-metrics-scraper" id=395d8622-7112-454a-9247-5b1a73c33e43 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:34:33 default-k8s-diff-port-046175 crio[570]: time="2025-11-22T00:34:33.064202829Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:34:33 default-k8s-diff-port-046175 crio[570]: time="2025-11-22T00:34:33.070727586Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:34:33 default-k8s-diff-port-046175 crio[570]: time="2025-11-22T00:34:33.071293727Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:34:33 default-k8s-diff-port-046175 crio[570]: time="2025-11-22T00:34:33.103330012Z" level=info msg="Created container 945b05c05cd0090b92ce4fb575bdcf88d9a6d27815280f0820a63bfd406e62a6: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ljm4t/dashboard-metrics-scraper" id=395d8622-7112-454a-9247-5b1a73c33e43 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:34:33 default-k8s-diff-port-046175 crio[570]: time="2025-11-22T00:34:33.103810963Z" level=info msg="Starting container: 945b05c05cd0090b92ce4fb575bdcf88d9a6d27815280f0820a63bfd406e62a6" id=8a35759d-c490-4a85-8cc1-98e788540d19 name=/runtime.v1.RuntimeService/StartContainer
	Nov 22 00:34:33 default-k8s-diff-port-046175 crio[570]: time="2025-11-22T00:34:33.105636975Z" level=info msg="Started container" PID=1799 containerID=945b05c05cd0090b92ce4fb575bdcf88d9a6d27815280f0820a63bfd406e62a6 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ljm4t/dashboard-metrics-scraper id=8a35759d-c490-4a85-8cc1-98e788540d19 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ab47096307824e8a49d25f1f5a8eb219fb566d254ada11368edfffe29e2ffe0a
	Nov 22 00:34:33 default-k8s-diff-port-046175 crio[570]: time="2025-11-22T00:34:33.209962183Z" level=info msg="Removing container: 140851972c704d2b8933d765c5a0b469002ddd01eaccfb2bd77463c8bb21a8ca" id=2f22353e-73dd-47de-ac1c-6e695f5dfbbe name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 22 00:34:33 default-k8s-diff-port-046175 crio[570]: time="2025-11-22T00:34:33.218859032Z" level=info msg="Removed container 140851972c704d2b8933d765c5a0b469002ddd01eaccfb2bd77463c8bb21a8ca: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ljm4t/dashboard-metrics-scraper" id=2f22353e-73dd-47de-ac1c-6e695f5dfbbe name=/runtime.v1.RuntimeService/RemoveContainer
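	The create/start/remove cycle for dashboard-metrics-scraper in the CRI-O log above (container 140851972c70... removed once 945b05c05cd0... starts) is the runtime side of a restarting container; it can be inspected directly on the node with, for example:

	    sudo crictl ps -a --name dashboard-metrics-scraper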
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	945b05c05cd00       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           11 seconds ago      Exited              dashboard-metrics-scraper   2                   ab47096307824       dashboard-metrics-scraper-6ffb444bf9-ljm4t             kubernetes-dashboard
	08df4c4e3b8f7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           18 seconds ago      Running             storage-provisioner         1                   60cd7259cfcbe       storage-provisioner                                    kube-system
	e17e5c680ad7a       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   38 seconds ago      Running             kubernetes-dashboard        0                   e5cd0086b3132       kubernetes-dashboard-855c9754f9-jqktd                  kubernetes-dashboard
	4b12872c6fa61       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           49 seconds ago      Running             coredns                     0                   395a10ea1877b       coredns-66bc5c9577-np5nq                               kube-system
	033fbd2f51fa0       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           49 seconds ago      Running             busybox                     1                   1daed0982a61d       busybox                                                default
	396b80189fa57       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           49 seconds ago      Exited              storage-provisioner         0                   60cd7259cfcbe       storage-provisioner                                    kube-system
	2293f34669dda       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           49 seconds ago      Running             kindnet-cni                 0                   57ca72b7429be       kindnet-nqk28                                          kube-system
	5b02c4f39f6de       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           49 seconds ago      Running             kube-proxy                  0                   be9c9ca5ae8e8       kube-proxy-jdzcl                                       kube-system
	1371a5a17f4e2       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           52 seconds ago      Running             kube-scheduler              0                   57a696312afe7       kube-scheduler-default-k8s-diff-port-046175            kube-system
	baff64b8980c8       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           52 seconds ago      Running             kube-apiserver              0                   fed6a5cb5b9ac       kube-apiserver-default-k8s-diff-port-046175            kube-system
	cb4effdd05eb9       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           52 seconds ago      Running             etcd                        0                   2d7363f9b64f0       etcd-default-k8s-diff-port-046175                      kube-system
	c9323b87a3cb9       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           52 seconds ago      Running             kube-controller-manager     0                   49a6381d6f23c       kube-controller-manager-default-k8s-diff-port-046175   kube-system
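	The Exited dashboard-metrics-scraper row with ATTEMPT 2 in the table above points to a container that keeps exiting and being recreated; the logs of the previous attempt are usually the fastest diagnostic, e.g.:

	    kubectl -n kubernetes-dashboard logs dashboard-metrics-scraper-6ffb444bf9-ljm4t --previous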
	
	
	==> coredns [4b12872c6fa61b798322e32c38f1859a68931ec051534300af7de32a14ecbb1e] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46370 - 45686 "HINFO IN 1010609819208698072.3699696803867637871. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.108359996s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
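	The three list failures above share one cause: coredns came up before the restarted apiserver was reachable on the service VIP 10.96.0.1:443, so its informers timed out once each and then recovered. Once the cluster settles, an illustrative way to confirm the VIP is backed by a live endpoint (a sketch, not part of the captured run) is:
	    kubectl --context default-k8s-diff-port-046175 get endpointslices -n default -l kubernetes.io/service-name=kubernetes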
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-046175
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-046175
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785
	                    minikube.k8s.io/name=default-k8s-diff-port-046175
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_22T00_33_00_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 22 Nov 2025 00:32:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-046175
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 22 Nov 2025 00:34:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 22 Nov 2025 00:34:25 +0000   Sat, 22 Nov 2025 00:32:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 22 Nov 2025 00:34:25 +0000   Sat, 22 Nov 2025 00:32:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 22 Nov 2025 00:34:25 +0000   Sat, 22 Nov 2025 00:32:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 22 Nov 2025 00:34:25 +0000   Sat, 22 Nov 2025 00:33:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-046175
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 5665009e93b91d39dc05718b691e3875
	  System UUID:                cef49250-3102-457d-90bd-87a6df160389
	  Boot ID:                    8e6acb0e-95c3-406d-a4d5-d86e16610b0f
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 coredns-66bc5c9577-np5nq                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     100s
	  kube-system                 etcd-default-k8s-diff-port-046175                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         107s
	  kube-system                 kindnet-nqk28                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      101s
	  kube-system                 kube-apiserver-default-k8s-diff-port-046175             250m (3%)     0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-046175    200m (2%)     0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-proxy-jdzcl                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 kube-scheduler-default-k8s-diff-port-046175             100m (1%)     0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-ljm4t              0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-jqktd                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 100s                 kube-proxy       
	  Normal  Starting                 49s                  kube-proxy       
	  Normal  Starting                 111s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  111s (x8 over 111s)  kubelet          Node default-k8s-diff-port-046175 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    111s (x8 over 111s)  kubelet          Node default-k8s-diff-port-046175 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     111s (x8 over 111s)  kubelet          Node default-k8s-diff-port-046175 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    106s                 kubelet          Node default-k8s-diff-port-046175 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  106s                 kubelet          Node default-k8s-diff-port-046175 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     106s                 kubelet          Node default-k8s-diff-port-046175 status is now: NodeHasSufficientPID
	  Normal  Starting                 106s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           102s                 node-controller  Node default-k8s-diff-port-046175 event: Registered Node default-k8s-diff-port-046175 in Controller
	  Normal  NodeReady                89s                  kubelet          Node default-k8s-diff-port-046175 status is now: NodeReady
	  Normal  Starting                 52s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  52s (x8 over 52s)    kubelet          Node default-k8s-diff-port-046175 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    52s (x8 over 52s)    kubelet          Node default-k8s-diff-port-046175 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     52s (x8 over 52s)    kubelet          Node default-k8s-diff-port-046175 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           47s                  node-controller  Node default-k8s-diff-port-046175 event: Registered Node default-k8s-diff-port-046175 in Controller
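	The triplicated NodeHasSufficientMemory/NodeHasNoDiskPressure/NodeHasSufficientPID groups are expected: the kubelet started three times in this run (initial boot, the restart 106s ago, and the post-restart 52s ago), and each start re-emits the node condition events. To read the same events in strict time order, an illustrative query is:
	    kubectl --context default-k8s-diff-port-046175 get events --field-selector involvedObject.name=default-k8s-diff-port-046175 --sort-by=.lastTimestamp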
	
	
	==> dmesg <==
	[  +0.082960] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023673] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.276450] kauditd_printk_skb: 47 callbacks suppressed
	[Nov21 23:48] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.062274] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.023896] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.023887] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.023879] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.023873] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +2.047773] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +4.031577] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[Nov21 23:49] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[ +16.383304] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[ +32.252695] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	
	
	==> etcd [cb4effdd05eb9c31be1bd5e532b9906269e3438992f9777dc396eb3006f69f34] <==
	{"level":"warn","ts":"2025-11-22T00:33:59.332331Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-22T00:33:59.014838Z","time spent":"317.457542ms","remote":"127.0.0.1:55038","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4725,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/deployments/kubernetes-dashboard/dashboard-metrics-scraper\" mod_revision:525 > success:<request_put:<key:\"/registry/deployments/kubernetes-dashboard/dashboard-metrics-scraper\" value_size:4649 >> failure:<request_range:<key:\"/registry/deployments/kubernetes-dashboard/dashboard-metrics-scraper\" > >"}
	{"level":"info","ts":"2025-11-22T00:33:59.533909Z","caller":"traceutil/trace.go:172","msg":"trace[1664692026] linearizableReadLoop","detail":"{readStateIndex:576; appliedIndex:576; }","duration":"144.231686ms","start":"2025-11-22T00:33:59.389654Z","end":"2025-11-22T00:33:59.533886Z","steps":["trace[1664692026] 'read index received'  (duration: 144.223897ms)","trace[1664692026] 'applied index is now lower than readState.Index'  (duration: 6.5µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-22T00:33:59.648936Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"259.252845ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ljm4t\" limit:1 ","response":"range_response_count:1 size:2792"}
	{"level":"info","ts":"2025-11-22T00:33:59.649306Z","caller":"traceutil/trace.go:172","msg":"trace[1633187650] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ljm4t; range_end:; response_count:1; response_revision:545; }","duration":"259.63624ms","start":"2025-11-22T00:33:59.389649Z","end":"2025-11-22T00:33:59.649286Z","steps":["trace[1633187650] 'agreement among raft nodes before linearized reading'  (duration: 144.335651ms)","trace[1633187650] 'range keys from in-memory index tree'  (duration: 114.86853ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-22T00:33:59.649445Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"115.445067ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597221124013874 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/deployments/kubernetes-dashboard/kubernetes-dashboard\" mod_revision:542 > success:<request_put:<key:\"/registry/deployments/kubernetes-dashboard/kubernetes-dashboard\" value_size:4847 >> failure:<request_range:<key:\"/registry/deployments/kubernetes-dashboard/kubernetes-dashboard\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-22T00:33:59.650067Z","caller":"traceutil/trace.go:172","msg":"trace[394589799] linearizableReadLoop","detail":"{readStateIndex:577; appliedIndex:576; }","duration":"116.073918ms","start":"2025-11-22T00:33:59.533962Z","end":"2025-11-22T00:33:59.650036Z","steps":["trace[394589799] 'read index received'  (duration: 29.208µs)","trace[394589799] 'applied index is now lower than readState.Index'  (duration: 116.043068ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-22T00:33:59.650274Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"184.942899ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kubernetes-dashboard/kubernetes-dashboard\" limit:1 ","response":"range_response_count:1 size:957"}
	{"level":"info","ts":"2025-11-22T00:33:59.650308Z","caller":"traceutil/trace.go:172","msg":"trace[1320213160] range","detail":"{range_begin:/registry/serviceaccounts/kubernetes-dashboard/kubernetes-dashboard; range_end:; response_count:1; response_revision:546; }","duration":"184.984412ms","start":"2025-11-22T00:33:59.465315Z","end":"2025-11-22T00:33:59.650299Z","steps":["trace[1320213160] 'agreement among raft nodes before linearized reading'  (duration: 184.823539ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-22T00:33:59.650365Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"231.814036ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-np5nq\" limit:1 ","response":"range_response_count:1 size:5944"}
	{"level":"info","ts":"2025-11-22T00:33:59.650403Z","caller":"traceutil/trace.go:172","msg":"trace[818869516] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-np5nq; range_end:; response_count:1; response_revision:546; }","duration":"231.863623ms","start":"2025-11-22T00:33:59.418531Z","end":"2025-11-22T00:33:59.650395Z","steps":["trace[818869516] 'agreement among raft nodes before linearized reading'  (duration: 231.733953ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-22T00:33:59.650453Z","caller":"traceutil/trace.go:172","msg":"trace[14681837] transaction","detail":"{read_only:false; response_revision:546; number_of_response:1; }","duration":"282.408395ms","start":"2025-11-22T00:33:59.368026Z","end":"2025-11-22T00:33:59.650435Z","steps":["trace[14681837] 'process raft request'  (duration: 165.909065ms)","trace[14681837] 'compare'  (duration: 115.36055ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-22T00:33:59.650615Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"185.326015ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kubernetes-dashboard/kubernetes-dashboard\" limit:1 ","response":"range_response_count:1 size:957"}
	{"level":"info","ts":"2025-11-22T00:33:59.650652Z","caller":"traceutil/trace.go:172","msg":"trace[501672378] range","detail":"{range_begin:/registry/serviceaccounts/kubernetes-dashboard/kubernetes-dashboard; range_end:; response_count:1; response_revision:546; }","duration":"185.365839ms","start":"2025-11-22T00:33:59.465277Z","end":"2025-11-22T00:33:59.650643Z","steps":["trace[501672378] 'agreement among raft nodes before linearized reading'  (duration: 185.254681ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-22T00:34:05.133908Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"130.996988ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597221124013982 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-3ev7gzm6szwznfxpz4rb57chxa\" mod_revision:479 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-3ev7gzm6szwznfxpz4rb57chxa\" value_size:620 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-3ev7gzm6szwznfxpz4rb57chxa\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-22T00:34:05.134090Z","caller":"traceutil/trace.go:172","msg":"trace[51142223] transaction","detail":"{read_only:false; response_revision:605; number_of_response:1; }","duration":"141.905815ms","start":"2025-11-22T00:34:04.992165Z","end":"2025-11-22T00:34:05.134071Z","steps":["trace[51142223] 'process raft request'  (duration: 10.674159ms)","trace[51142223] 'compare'  (duration: 130.889113ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-22T00:34:06.063397Z","caller":"traceutil/trace.go:172","msg":"trace[1060462428] transaction","detail":"{read_only:false; response_revision:608; number_of_response:1; }","duration":"106.148564ms","start":"2025-11-22T00:34:05.957234Z","end":"2025-11-22T00:34:06.063383Z","steps":["trace[1060462428] 'process raft request'  (duration: 106.034297ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-22T00:34:27.300767Z","caller":"traceutil/trace.go:172","msg":"trace[157611854] transaction","detail":"{read_only:false; response_revision:654; number_of_response:1; }","duration":"101.112516ms","start":"2025-11-22T00:34:27.199638Z","end":"2025-11-22T00:34:27.300751Z","steps":["trace[157611854] 'process raft request'  (duration: 100.934094ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-22T00:34:28.527832Z","caller":"traceutil/trace.go:172","msg":"trace[1348268258] linearizableReadLoop","detail":"{readStateIndex:691; appliedIndex:691; }","duration":"109.274775ms","start":"2025-11-22T00:34:28.418532Z","end":"2025-11-22T00:34:28.527807Z","steps":["trace[1348268258] 'read index received'  (duration: 109.262365ms)","trace[1348268258] 'applied index is now lower than readState.Index'  (duration: 11.244µs)"],"step_count":2}
	{"level":"info","ts":"2025-11-22T00:34:28.527975Z","caller":"traceutil/trace.go:172","msg":"trace[1277067968] transaction","detail":"{read_only:false; response_revision:655; number_of_response:1; }","duration":"132.105256ms","start":"2025-11-22T00:34:28.395855Z","end":"2025-11-22T00:34:28.527960Z","steps":["trace[1277067968] 'process raft request'  (duration: 131.959753ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-22T00:34:28.528081Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"109.501471ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-np5nq\" limit:1 ","response":"range_response_count:1 size:5944"}
	{"level":"info","ts":"2025-11-22T00:34:28.528150Z","caller":"traceutil/trace.go:172","msg":"trace[447318893] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-np5nq; range_end:; response_count:1; response_revision:654; }","duration":"109.61571ms","start":"2025-11-22T00:34:28.418521Z","end":"2025-11-22T00:34:28.528137Z","steps":["trace[447318893] 'agreement among raft nodes before linearized reading'  (duration: 109.361827ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-22T00:34:28.685179Z","caller":"traceutil/trace.go:172","msg":"trace[738216789] transaction","detail":"{read_only:false; response_revision:657; number_of_response:1; }","duration":"152.239389ms","start":"2025-11-22T00:34:28.532925Z","end":"2025-11-22T00:34:28.685165Z","steps":["trace[738216789] 'process raft request'  (duration: 152.199943ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-22T00:34:28.685227Z","caller":"traceutil/trace.go:172","msg":"trace[1750452361] transaction","detail":"{read_only:false; response_revision:656; number_of_response:1; }","duration":"152.281617ms","start":"2025-11-22T00:34:28.532923Z","end":"2025-11-22T00:34:28.685205Z","steps":["trace[1750452361] 'process raft request'  (duration: 105.100712ms)","trace[1750452361] 'compare'  (duration: 46.962336ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-22T00:34:28.692414Z","caller":"traceutil/trace.go:172","msg":"trace[1901711117] transaction","detail":"{read_only:false; response_revision:658; number_of_response:1; }","duration":"157.799685ms","start":"2025-11-22T00:34:28.534598Z","end":"2025-11-22T00:34:28.692398Z","steps":["trace[1901711117] 'process raft request'  (duration: 157.668707ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-22T00:34:28.883279Z","caller":"traceutil/trace.go:172","msg":"trace[1125045978] transaction","detail":"{read_only:false; response_revision:659; number_of_response:1; }","duration":"181.523907ms","start":"2025-11-22T00:34:28.701733Z","end":"2025-11-22T00:34:28.883256Z","steps":["trace[1125045978] 'process raft request'  (duration: 137.757615ms)","trace[1125045978] 'compare'  (duration: 43.642908ms)"],"step_count":2}
	
	
	==> kernel <==
	 00:34:44 up  1:17,  0 user,  load average: 4.11, 3.35, 2.12
	Linux default-k8s-diff-port-046175 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [2293f34669ddac65d895a603acb24bfc4d87bf04bf17a68f75a667e0f0386e29] <==
	I1122 00:33:55.592728       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1122 00:33:55.686139       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1122 00:33:55.686439       1 main.go:148] setting mtu 1500 for CNI 
	I1122 00:33:55.686465       1 main.go:178] kindnetd IP family: "ipv4"
	I1122 00:33:55.686481       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-22T00:33:55Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1122 00:33:55.987779       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1122 00:33:55.987814       1 controller.go:381] "Waiting for informer caches to sync"
	I1122 00:33:55.987829       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1122 00:33:55.987979       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1122 00:33:56.288356       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1122 00:33:56.288395       1 metrics.go:72] Registering metrics
	I1122 00:33:56.288524       1 controller.go:711] "Syncing nftables rules"
	I1122 00:34:05.897185       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1122 00:34:05.897275       1 main.go:301] handling current node
	I1122 00:34:15.897247       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1122 00:34:15.897291       1 main.go:301] handling current node
	I1122 00:34:25.897446       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1122 00:34:25.897499       1 main.go:301] handling current node
	I1122 00:34:35.897669       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1122 00:34:35.897699       1 main.go:301] handling current node
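	The single "nri plugin exited" line above is benign here: kindnet optionally registers an NRI plugin, and the CRI-O in this image does not expose a socket at /var/run/nri/nri.sock. If that mattered, the socket's absence could be confirmed from the node with an illustrative check such as:
	    out/minikube-linux-amd64 -p default-k8s-diff-port-046175 ssh -- ls -l /var/run/nri/nri.sock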
	
	
	==> kube-apiserver [baff64b8980c8ff7dd1c7ba87a50d4ea1b4d0bc4551fdda3b346aed0dd0806fc] <==
	I1122 00:33:54.649026       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1122 00:33:54.647672       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1122 00:33:54.647684       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1122 00:33:54.647724       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1122 00:33:54.650797       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1122 00:33:54.657572       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1122 00:33:54.668377       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1122 00:33:54.674265       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1122 00:33:54.679536       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1122 00:33:54.679567       1 policy_source.go:240] refreshing policies
	I1122 00:33:54.722520       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1122 00:33:54.722585       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1122 00:33:54.735814       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1122 00:33:55.139323       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1122 00:33:55.146789       1 controller.go:667] quota admission added evaluator for: namespaces
	I1122 00:33:55.188684       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1122 00:33:55.213481       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1122 00:33:55.220918       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1122 00:33:55.273433       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.63.65"}
	I1122 00:33:55.287717       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.224.254"}
	I1122 00:33:55.558771       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1122 00:33:58.393225       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1122 00:33:58.522402       1 controller.go:667] quota admission added evaluator for: endpoints
	I1122 00:33:58.523155       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [c9323b87a3cb9e9f47608ebbfc01d685fde2c082c4217ffeafce458f5e9b9ead] <==
	I1122 00:33:57.966791       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1122 00:33:57.990107       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1122 00:33:57.990117       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1122 00:33:57.990226       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1122 00:33:57.990530       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1122 00:33:57.990559       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1122 00:33:57.991445       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1122 00:33:57.991540       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1122 00:33:57.991638       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-046175"
	I1122 00:33:57.991682       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1122 00:33:57.994693       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1122 00:33:57.995853       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1122 00:33:57.995947       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1122 00:33:57.996009       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1122 00:33:57.996018       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1122 00:33:57.996025       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1122 00:33:57.996962       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1122 00:33:57.999229       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1122 00:33:58.001488       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1122 00:33:58.002641       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1122 00:33:58.003726       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1122 00:33:58.004907       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1122 00:33:58.007144       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1122 00:33:58.011330       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1122 00:33:58.015626       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [5b02c4f39f6deaa78cd85e6b355b467c645ddb1564142788a9c2995c61b6f880] <==
	I1122 00:33:55.492792       1 server_linux.go:53] "Using iptables proxy"
	I1122 00:33:55.569464       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1122 00:33:55.670323       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1122 00:33:55.670442       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1122 00:33:55.670597       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1122 00:33:55.697883       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1122 00:33:55.697953       1 server_linux.go:132] "Using iptables Proxier"
	I1122 00:33:55.703766       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1122 00:33:55.704237       1 server.go:527] "Version info" version="v1.34.1"
	I1122 00:33:55.704268       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:33:55.708138       1 config.go:106] "Starting endpoint slice config controller"
	I1122 00:33:55.709810       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1122 00:33:55.708334       1 config.go:200] "Starting service config controller"
	I1122 00:33:55.709846       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1122 00:33:55.708361       1 config.go:403] "Starting serviceCIDR config controller"
	I1122 00:33:55.709859       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1122 00:33:55.710953       1 config.go:309] "Starting node config controller"
	I1122 00:33:55.715597       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1122 00:33:55.715950       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1122 00:33:55.809969       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1122 00:33:55.810263       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1122 00:33:55.810284       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [1371a5a17f4e24662cf2becd362174f92c814b7d7c998f6684dc3377977af331] <==
	I1122 00:33:53.395812       1 serving.go:386] Generated self-signed cert in-memory
	I1122 00:33:54.676279       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1122 00:33:54.677340       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:33:54.694229       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1122 00:33:54.694389       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1122 00:33:54.694417       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1122 00:33:54.694466       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1122 00:33:54.698047       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:33:54.698106       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:33:54.698129       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1122 00:33:54.698137       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1122 00:33:54.794910       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1122 00:33:54.799149       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1122 00:33:54.799219       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 22 00:33:58 default-k8s-diff-port-046175 kubelet[736]: I1122 00:33:58.386176     736 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 22 00:33:59 default-k8s-diff-port-046175 kubelet[736]: I1122 00:33:59.362570     736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/c7691f68-4748-403c-b999-decb49f55769-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-jqktd\" (UID: \"c7691f68-4748-403c-b999-decb49f55769\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-jqktd"
	Nov 22 00:33:59 default-k8s-diff-port-046175 kubelet[736]: I1122 00:33:59.362639     736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhq49\" (UniqueName: \"kubernetes.io/projected/c7691f68-4748-403c-b999-decb49f55769-kube-api-access-lhq49\") pod \"kubernetes-dashboard-855c9754f9-jqktd\" (UID: \"c7691f68-4748-403c-b999-decb49f55769\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-jqktd"
	Nov 22 00:33:59 default-k8s-diff-port-046175 kubelet[736]: I1122 00:33:59.362667     736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gpsp\" (UniqueName: \"kubernetes.io/projected/81212010-4d9f-4af8-8e8b-c43717e014b7-kube-api-access-4gpsp\") pod \"dashboard-metrics-scraper-6ffb444bf9-ljm4t\" (UID: \"81212010-4d9f-4af8-8e8b-c43717e014b7\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ljm4t"
	Nov 22 00:33:59 default-k8s-diff-port-046175 kubelet[736]: I1122 00:33:59.362743     736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/81212010-4d9f-4af8-8e8b-c43717e014b7-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-ljm4t\" (UID: \"81212010-4d9f-4af8-8e8b-c43717e014b7\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ljm4t"
	Nov 22 00:34:09 default-k8s-diff-port-046175 kubelet[736]: I1122 00:34:09.795036     736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-jqktd" podStartSLOduration=5.820636155 podStartE2EDuration="11.795013675s" podCreationTimestamp="2025-11-22 00:33:58 +0000 UTC" firstStartedPulling="2025-11-22 00:33:59.980251611 +0000 UTC m=+8.016771000" lastFinishedPulling="2025-11-22 00:34:05.954629139 +0000 UTC m=+13.991148520" observedRunningTime="2025-11-22 00:34:07.154610729 +0000 UTC m=+15.191130118" watchObservedRunningTime="2025-11-22 00:34:09.795013675 +0000 UTC m=+17.831533067"
	Nov 22 00:34:10 default-k8s-diff-port-046175 kubelet[736]: I1122 00:34:10.140886     736 scope.go:117] "RemoveContainer" containerID="ed8d6832442230c35937e0b726233a7376c70287f3e344300e8cc0d4439a4e8b"
	Nov 22 00:34:11 default-k8s-diff-port-046175 kubelet[736]: I1122 00:34:11.145245     736 scope.go:117] "RemoveContainer" containerID="ed8d6832442230c35937e0b726233a7376c70287f3e344300e8cc0d4439a4e8b"
	Nov 22 00:34:11 default-k8s-diff-port-046175 kubelet[736]: I1122 00:34:11.145614     736 scope.go:117] "RemoveContainer" containerID="140851972c704d2b8933d765c5a0b469002ddd01eaccfb2bd77463c8bb21a8ca"
	Nov 22 00:34:11 default-k8s-diff-port-046175 kubelet[736]: E1122 00:34:11.145804     736 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ljm4t_kubernetes-dashboard(81212010-4d9f-4af8-8e8b-c43717e014b7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ljm4t" podUID="81212010-4d9f-4af8-8e8b-c43717e014b7"
	Nov 22 00:34:12 default-k8s-diff-port-046175 kubelet[736]: I1122 00:34:12.148214     736 scope.go:117] "RemoveContainer" containerID="140851972c704d2b8933d765c5a0b469002ddd01eaccfb2bd77463c8bb21a8ca"
	Nov 22 00:34:12 default-k8s-diff-port-046175 kubelet[736]: E1122 00:34:12.148392     736 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ljm4t_kubernetes-dashboard(81212010-4d9f-4af8-8e8b-c43717e014b7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ljm4t" podUID="81212010-4d9f-4af8-8e8b-c43717e014b7"
	Nov 22 00:34:20 default-k8s-diff-port-046175 kubelet[736]: I1122 00:34:20.294359     736 scope.go:117] "RemoveContainer" containerID="140851972c704d2b8933d765c5a0b469002ddd01eaccfb2bd77463c8bb21a8ca"
	Nov 22 00:34:20 default-k8s-diff-port-046175 kubelet[736]: E1122 00:34:20.294584     736 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ljm4t_kubernetes-dashboard(81212010-4d9f-4af8-8e8b-c43717e014b7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ljm4t" podUID="81212010-4d9f-4af8-8e8b-c43717e014b7"
	Nov 22 00:34:26 default-k8s-diff-port-046175 kubelet[736]: I1122 00:34:26.188299     736 scope.go:117] "RemoveContainer" containerID="396b80189fa578113fc7607b685f5f11b40553d27cb2a04db7b505536915321a"
	Nov 22 00:34:33 default-k8s-diff-port-046175 kubelet[736]: I1122 00:34:33.059475     736 scope.go:117] "RemoveContainer" containerID="140851972c704d2b8933d765c5a0b469002ddd01eaccfb2bd77463c8bb21a8ca"
	Nov 22 00:34:33 default-k8s-diff-port-046175 kubelet[736]: I1122 00:34:33.208782     736 scope.go:117] "RemoveContainer" containerID="140851972c704d2b8933d765c5a0b469002ddd01eaccfb2bd77463c8bb21a8ca"
	Nov 22 00:34:33 default-k8s-diff-port-046175 kubelet[736]: I1122 00:34:33.208962     736 scope.go:117] "RemoveContainer" containerID="945b05c05cd0090b92ce4fb575bdcf88d9a6d27815280f0820a63bfd406e62a6"
	Nov 22 00:34:33 default-k8s-diff-port-046175 kubelet[736]: E1122 00:34:33.209182     736 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ljm4t_kubernetes-dashboard(81212010-4d9f-4af8-8e8b-c43717e014b7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ljm4t" podUID="81212010-4d9f-4af8-8e8b-c43717e014b7"
	Nov 22 00:34:40 default-k8s-diff-port-046175 kubelet[736]: I1122 00:34:40.294472     736 scope.go:117] "RemoveContainer" containerID="945b05c05cd0090b92ce4fb575bdcf88d9a6d27815280f0820a63bfd406e62a6"
	Nov 22 00:34:40 default-k8s-diff-port-046175 kubelet[736]: E1122 00:34:40.294701     736 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ljm4t_kubernetes-dashboard(81212010-4d9f-4af8-8e8b-c43717e014b7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ljm4t" podUID="81212010-4d9f-4af8-8e8b-c43717e014b7"
	Nov 22 00:34:42 default-k8s-diff-port-046175 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 22 00:34:42 default-k8s-diff-port-046175 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 22 00:34:42 default-k8s-diff-port-046175 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 22 00:34:42 default-k8s-diff-port-046175 systemd[1]: kubelet.service: Consumed 1.592s CPU time.
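	The systemd lines above are the pause step itself: pausing the profile stops kubelet.service while the node container keeps running, which is consistent with the Pause test being exercised here. An illustrative way to confirm the unit state from the host (assuming the profile is still up) is:
	    out/minikube-linux-amd64 -p default-k8s-diff-port-046175 ssh -- sudo systemctl is-active kubelet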
	
	
	==> kubernetes-dashboard [e17e5c680ad7a142a3deec04ab3951d68eb8d7e36343494542d0ae2b4b532db6] <==
	2025/11/22 00:34:06 Starting overwatch
	2025/11/22 00:34:06 Using namespace: kubernetes-dashboard
	2025/11/22 00:34:06 Using in-cluster config to connect to apiserver
	2025/11/22 00:34:06 Using secret token for csrf signing
	2025/11/22 00:34:06 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/22 00:34:06 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/22 00:34:06 Successful initial request to the apiserver, version: v1.34.1
	2025/11/22 00:34:06 Generating JWE encryption key
	2025/11/22 00:34:06 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/22 00:34:06 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/22 00:34:06 Initializing JWE encryption key from synchronized object
	2025/11/22 00:34:06 Creating in-cluster Sidecar client
	2025/11/22 00:34:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/22 00:34:06 Serving insecurely on HTTP port: 9090
	2025/11/22 00:34:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
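	Both health-check failures are consistent with the dashboard-metrics-scraper pod shown earlier in CrashLoopBackOff: the dashboard itself serves on port 9090, but its metrics sidecar service has no ready backend. An illustrative way to pull the scraper's crash output would be:
	    kubectl --context default-k8s-diff-port-046175 -n kubernetes-dashboard logs deploy/dashboard-metrics-scraper --previous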
	
	
	==> storage-provisioner [08df4c4e3b8f76c73937401a17c2bdb997ebb970dfdcbe313d7d1e8b59c95ad5] <==
	I1122 00:34:26.252326       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1122 00:34:26.261377       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1122 00:34:26.261425       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1122 00:34:26.263440       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:34:29.719346       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:34:33.979394       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:34:37.578015       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:34:40.631132       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:34:43.653668       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:34:43.660828       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1122 00:34:43.660992       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1122 00:34:43.661171       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f83cc8c2-c96e-4e26-a62a-1dc1f2279333", APIVersion:"v1", ResourceVersion:"670", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-046175_b805cbeb-2265-4a62-abc1-5c79cc010bb1 became leader
	I1122 00:34:43.661226       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-046175_b805cbeb-2265-4a62-abc1-5c79cc010bb1!
	W1122 00:34:43.667228       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:34:43.670647       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1122 00:34:43.761929       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-046175_b805cbeb-2265-4a62-abc1-5c79cc010bb1!
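	The repeated Endpoints deprecation warnings come from the provisioner's leader-election lock, which still uses a v1 Endpoints object (kube-system/k8s.io-minikube-hostpath, named in the LeaderElection event above). The lock object can be inspected directly with an illustrative command such as:
	    kubectl --context default-k8s-diff-port-046175 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml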
	
	
	==> storage-provisioner [396b80189fa578113fc7607b685f5f11b40553d27cb2a04db7b505536915321a] <==
	I1122 00:33:55.452300       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1122 00:34:25.456114       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-046175 -n default-k8s-diff-port-046175
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-046175 -n default-k8s-diff-port-046175: exit status 2 (391.142776ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-046175 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-046175
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-046175:

-- stdout --
	[
	    {
	        "Id": "45fe2cf873e1459f1b2e2590379fa9c19405304d5d34fbc56c223b1a0d28973e",
	        "Created": "2025-11-22T00:32:41.655265951Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 280692,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-22T00:33:45.692662028Z",
	            "FinishedAt": "2025-11-22T00:33:44.719002717Z"
	        },
	        "Image": "sha256:e5906b22e872a17998ae88aee6d850484e7a99144e0db6afcf2c44a53e6042d4",
	        "ResolvConfPath": "/var/lib/docker/containers/45fe2cf873e1459f1b2e2590379fa9c19405304d5d34fbc56c223b1a0d28973e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/45fe2cf873e1459f1b2e2590379fa9c19405304d5d34fbc56c223b1a0d28973e/hostname",
	        "HostsPath": "/var/lib/docker/containers/45fe2cf873e1459f1b2e2590379fa9c19405304d5d34fbc56c223b1a0d28973e/hosts",
	        "LogPath": "/var/lib/docker/containers/45fe2cf873e1459f1b2e2590379fa9c19405304d5d34fbc56c223b1a0d28973e/45fe2cf873e1459f1b2e2590379fa9c19405304d5d34fbc56c223b1a0d28973e-json.log",
	        "Name": "/default-k8s-diff-port-046175",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-046175:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-046175",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "45fe2cf873e1459f1b2e2590379fa9c19405304d5d34fbc56c223b1a0d28973e",
	                "LowerDir": "/var/lib/docker/overlay2/565c4e45a52dea4a903d13848e091e336b960e3c2e8c160cc7329c79f51cfcf8-init/diff:/var/lib/docker/overlay2/ba9391341a6f38c98c330a240b44a37e901d779a7c95e15c141c59c46e09c348/diff",
	                "MergedDir": "/var/lib/docker/overlay2/565c4e45a52dea4a903d13848e091e336b960e3c2e8c160cc7329c79f51cfcf8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/565c4e45a52dea4a903d13848e091e336b960e3c2e8c160cc7329c79f51cfcf8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/565c4e45a52dea4a903d13848e091e336b960e3c2e8c160cc7329c79f51cfcf8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-046175",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-046175/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-046175",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-046175",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-046175",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "17032fee03037e6a281a212f15946738979f5cee4f39076c21364065665c6b12",
	            "SandboxKey": "/var/run/docker/netns/17032fee0303",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-046175": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "85b8c03d926ba0e46aa73effaa1a551cb600a9455d371f54191cd0d2f0a6ca5c",
	                    "EndpointID": "e5812ca5f443777c6da26244679cc2fa937ac1da718366c78cbd20c3ca6e437d",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "12:bf:c7:f4:75:f9",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-046175",
	                        "45fe2cf873e1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
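The block above is standard docker container inspect output for the paused node container. A minimal sketch for reproducing it by hand and pulling out a single field, assuming the default-k8s-diff-port-046175 container still exists; the Go template has the same shape as the port lookups the test framework itself runs later in this log:

	# Full inspect document, as dumped above:
	docker container inspect default-k8s-diff-port-046175
	# Host port mapped to the API server port 8444/tcp; per the
	# NetworkSettings.Ports section above this should print 33101:
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}' default-k8s-diff-port-046175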
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-046175 -n default-k8s-diff-port-046175
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-046175 -n default-k8s-diff-port-046175: exit status 2 (311.724415ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
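The --format flag takes a Go template over minikube's status struct, so the other components can be queried the same way; Host, Kubelet, APIServer and Kubeconfig are the documented field names. Exit status 2 is commonly returned when cluster components are not running even though the host container is, which is consistent with a profile that was just paused, hence "may be ok". A minimal sketch, assuming the profile still exists:

	out/minikube-linux-amd64 status -p default-k8s-diff-port-046175 --format '{{.Host}} {{.Kubelet}} {{.APIServer}} {{.Kubeconfig}}'
	# On a freshly paused profile this is expected to print something like:
	#   Running Stopped Paused Configured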
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-046175 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-046175 logs -n 25: (1.142119905s)
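The -n flag limits how many lines are collected from each log source; the audit table and the recent start log below come from the same dump. A minimal sketch, assuming a minikube release that supports the --audit flag of minikube logs, for printing just the audit entries:

	out/minikube-linux-amd64 -p default-k8s-diff-port-046175 logs --audit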
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p newest-cni-531189 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-531189            │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │                     │
	│ stop    │ -p newest-cni-531189 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-531189            │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │ 22 Nov 25 00:33 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-046175 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-046175 │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-046175 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-046175 │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │ 22 Nov 25 00:33 UTC │
	│ addons  │ enable dashboard -p newest-cni-531189 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-531189            │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │ 22 Nov 25 00:33 UTC │
	│ start   │ -p newest-cni-531189 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-531189            │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │ 22 Nov 25 00:33 UTC │
	│ image   │ newest-cni-531189 image list --format=json                                                                                                                                                                                                    │ newest-cni-531189            │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │ 22 Nov 25 00:33 UTC │
	│ pause   │ -p newest-cni-531189 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-531189            │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-046175 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-046175 │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │ 22 Nov 25 00:33 UTC │
	│ start   │ -p default-k8s-diff-port-046175 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-046175 │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │ 22 Nov 25 00:34 UTC │
	│ start   │ -p kubernetes-upgrade-619859 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-619859    │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │                     │
	│ start   │ -p kubernetes-upgrade-619859 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-619859    │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │ 22 Nov 25 00:33 UTC │
	│ delete  │ -p newest-cni-531189                                                                                                                                                                                                                          │ newest-cni-531189            │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │ 22 Nov 25 00:33 UTC │
	│ delete  │ -p newest-cni-531189                                                                                                                                                                                                                          │ newest-cni-531189            │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │ 22 Nov 25 00:33 UTC │
	│ start   │ -p auto-239758 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-239758                  │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │ 22 Nov 25 00:34 UTC │
	│ delete  │ -p kubernetes-upgrade-619859                                                                                                                                                                                                                  │ kubernetes-upgrade-619859    │ jenkins │ v1.37.0 │ 22 Nov 25 00:33 UTC │ 22 Nov 25 00:34 UTC │
	│ start   │ -p kindnet-239758 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio                                                                                                      │ kindnet-239758               │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │ 22 Nov 25 00:34 UTC │
	│ image   │ embed-certs-084979 image list --format=json                                                                                                                                                                                                   │ embed-certs-084979           │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │ 22 Nov 25 00:34 UTC │
	│ pause   │ -p embed-certs-084979 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-084979           │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │                     │
	│ delete  │ -p embed-certs-084979                                                                                                                                                                                                                         │ embed-certs-084979           │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │ 22 Nov 25 00:34 UTC │
	│ delete  │ -p embed-certs-084979                                                                                                                                                                                                                         │ embed-certs-084979           │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │ 22 Nov 25 00:34 UTC │
	│ start   │ -p calico-239758 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio                                                                                                        │ calico-239758                │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │                     │
	│ ssh     │ -p auto-239758 pgrep -a kubelet                                                                                                                                                                                                               │ auto-239758                  │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │ 22 Nov 25 00:34 UTC │
	│ image   │ default-k8s-diff-port-046175 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-046175 │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │ 22 Nov 25 00:34 UTC │
	│ pause   │ -p default-k8s-diff-port-046175 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-046175 │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/22 00:34:24
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1122 00:34:24.029676  293341 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:34:24.029769  293341 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:34:24.029775  293341 out.go:374] Setting ErrFile to fd 2...
	I1122 00:34:24.029781  293341 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:34:24.030144  293341 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9122/.minikube/bin
	I1122 00:34:24.030763  293341 out.go:368] Setting JSON to false
	I1122 00:34:24.032474  293341 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4613,"bootTime":1763767051,"procs":397,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1122 00:34:24.032550  293341 start.go:143] virtualization: kvm guest
	I1122 00:34:24.034359  293341 out.go:179] * [calico-239758] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1122 00:34:24.035702  293341 out.go:179]   - MINIKUBE_LOCATION=21934
	I1122 00:34:24.035713  293341 notify.go:221] Checking for updates...
	I1122 00:34:24.037774  293341 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 00:34:24.038817  293341 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-9122/kubeconfig
	I1122 00:34:24.039719  293341 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-9122/.minikube
	I1122 00:34:24.040772  293341 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1122 00:34:24.042535  293341 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 00:34:24.044518  293341 config.go:182] Loaded profile config "auto-239758": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:34:24.044679  293341 config.go:182] Loaded profile config "default-k8s-diff-port-046175": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:34:24.044815  293341 config.go:182] Loaded profile config "kindnet-239758": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:34:24.044946  293341 driver.go:422] Setting default libvirt URI to qemu:///system
	I1122 00:34:24.069089  293341 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1122 00:34:24.069181  293341 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:34:24.125503  293341 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-22 00:34:24.115599092 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1122 00:34:24.125637  293341 docker.go:319] overlay module found
	I1122 00:34:24.127692  293341 out.go:179] * Using the docker driver based on user configuration
	I1122 00:34:24.128801  293341 start.go:309] selected driver: docker
	I1122 00:34:24.128821  293341 start.go:930] validating driver "docker" against <nil>
	I1122 00:34:24.128834  293341 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 00:34:24.129696  293341 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:34:24.194223  293341 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-22 00:34:24.179749266 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1122 00:34:24.194464  293341 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1122 00:34:24.194695  293341 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:34:24.195982  293341 out.go:179] * Using Docker driver with root privileges
	I1122 00:34:24.197254  293341 cni.go:84] Creating CNI manager for "calico"
	I1122 00:34:24.197274  293341 start_flags.go:336] Found "Calico" CNI - setting NetworkPlugin=cni
	I1122 00:34:24.197349  293341 start.go:353] cluster config:
	{Name:calico-239758 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-239758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:34:24.199459  293341 out.go:179] * Starting "calico-239758" primary control-plane node in "calico-239758" cluster
	I1122 00:34:24.200924  293341 cache.go:134] Beginning downloading kic base image for docker with crio
	I1122 00:34:24.202070  293341 out.go:179] * Pulling base image v0.0.48-1763588073-21934 ...
	I1122 00:34:24.203215  293341 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:34:24.203263  293341 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21934-9122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1122 00:34:24.203274  293341 cache.go:65] Caching tarball of preloaded images
	I1122 00:34:24.203311  293341 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon
	I1122 00:34:24.203367  293341 preload.go:238] Found /home/jenkins/minikube-integration/21934-9122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1122 00:34:24.203386  293341 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1122 00:34:24.203489  293341 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/calico-239758/config.json ...
	I1122 00:34:24.203511  293341 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/calico-239758/config.json: {Name:mkcc0e4a7ad7f0864284895d8f9334a77f98ed17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:34:24.227620  293341 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon, skipping pull
	I1122 00:34:24.227644  293341 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e exists in daemon, skipping load
	I1122 00:34:24.227664  293341 cache.go:243] Successfully downloaded all kic artifacts
	I1122 00:34:24.227692  293341 start.go:360] acquireMachinesLock for calico-239758: {Name:mk2d48e655754253458a7b803b6f8c2a922a012a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:34:24.227803  293341 start.go:364] duration metric: took 89.565µs to acquireMachinesLock for "calico-239758"
	I1122 00:34:24.227831  293341 start.go:93] Provisioning new machine with config: &{Name:calico-239758 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-239758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1122 00:34:24.227952  293341 start.go:125] createHost starting for "" (driver="docker")
	I1122 00:34:24.154027  284750 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:34:24.233205  284750 kubeadm.go:1114] duration metric: took 4.699859925s to wait for elevateKubeSystemPrivileges
	I1122 00:34:24.233233  284750 kubeadm.go:403] duration metric: took 15.654196773s to StartCluster
	I1122 00:34:24.233250  284750 settings.go:142] acquiring lock: {Name:mk281bec5fc7c41e6f3fe8d6a1502b13a2db8fe5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:34:24.233326  284750 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21934-9122/kubeconfig
	I1122 00:34:24.234466  284750 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/kubeconfig: {Name:mk85f0b9d89ff824568ecbebf0d1111042a6bb93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:34:24.234676  284750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1122 00:34:24.234705  284750 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1122 00:34:24.234765  284750 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1122 00:34:24.234856  284750 addons.go:70] Setting storage-provisioner=true in profile "auto-239758"
	I1122 00:34:24.234877  284750 addons.go:239] Setting addon storage-provisioner=true in "auto-239758"
	I1122 00:34:24.234875  284750 addons.go:70] Setting default-storageclass=true in profile "auto-239758"
	I1122 00:34:24.234906  284750 host.go:66] Checking if "auto-239758" exists ...
	I1122 00:34:24.234914  284750 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "auto-239758"
	I1122 00:34:24.234919  284750 config.go:182] Loaded profile config "auto-239758": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:34:24.235291  284750 cli_runner.go:164] Run: docker container inspect auto-239758 --format={{.State.Status}}
	I1122 00:34:24.235417  284750 cli_runner.go:164] Run: docker container inspect auto-239758 --format={{.State.Status}}
	I1122 00:34:24.236217  284750 out.go:179] * Verifying Kubernetes components...
	I1122 00:34:24.237316  284750 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:34:24.260569  284750 addons.go:239] Setting addon default-storageclass=true in "auto-239758"
	I1122 00:34:24.260623  284750 host.go:66] Checking if "auto-239758" exists ...
	I1122 00:34:24.261105  284750 cli_runner.go:164] Run: docker container inspect auto-239758 --format={{.State.Status}}
	I1122 00:34:24.261428  284750 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1122 00:34:24.263189  284750 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:34:24.263209  284750 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1122 00:34:24.263286  284750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-239758
	I1122 00:34:24.285205  284750 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1122 00:34:24.285300  284750 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1122 00:34:24.285502  284750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-239758
	I1122 00:34:24.299617  284750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/auto-239758/id_rsa Username:docker}
	I1122 00:34:24.324601  284750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/auto-239758/id_rsa Username:docker}
	I1122 00:34:24.341787  284750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1122 00:34:24.413286  284750 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:34:24.431211  284750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:34:24.444933  284750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1122 00:34:24.562896  284750 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1122 00:34:24.564067  284750 node_ready.go:35] waiting up to 15m0s for node "auto-239758" to be "Ready" ...
	I1122 00:34:24.807443  284750 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1122 00:34:23.108525  286707 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1122 00:34:23.112709  286707 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1122 00:34:23.112726  286707 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1122 00:34:23.126810  286707 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1122 00:34:23.390159  286707 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1122 00:34:23.390257  286707 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:34:23.390346  286707 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-239758 minikube.k8s.io/updated_at=2025_11_22T00_34_23_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785 minikube.k8s.io/name=kindnet-239758 minikube.k8s.io/primary=true
	I1122 00:34:23.471140  286707 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:34:23.471140  286707 ops.go:34] apiserver oom_adj: -16
	I1122 00:34:23.971355  286707 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:34:24.471844  286707 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:34:24.971253  286707 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1122 00:34:22.421413  280462 pod_ready.go:104] pod "coredns-66bc5c9577-np5nq" is not "Ready", error: <nil>
	W1122 00:34:24.426889  280462 pod_ready.go:104] pod "coredns-66bc5c9577-np5nq" is not "Ready", error: <nil>
	I1122 00:34:25.471548  286707 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:34:25.972104  286707 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:34:26.471388  286707 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:34:26.972175  286707 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:34:27.471301  286707 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:34:27.971223  286707 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:34:28.121161  286707 kubeadm.go:1114] duration metric: took 4.730974449s to wait for elevateKubeSystemPrivileges
	I1122 00:34:28.121203  286707 kubeadm.go:403] duration metric: took 16.483509513s to StartCluster
	I1122 00:34:28.121227  286707 settings.go:142] acquiring lock: {Name:mk281bec5fc7c41e6f3fe8d6a1502b13a2db8fe5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:34:28.121311  286707 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21934-9122/kubeconfig
	I1122 00:34:28.122523  286707 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/kubeconfig: {Name:mk85f0b9d89ff824568ecbebf0d1111042a6bb93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:34:28.221919  286707 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1122 00:34:28.221980  286707 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1122 00:34:28.222002  286707 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1122 00:34:28.222128  286707 addons.go:70] Setting storage-provisioner=true in profile "kindnet-239758"
	I1122 00:34:28.222142  286707 addons.go:70] Setting default-storageclass=true in profile "kindnet-239758"
	I1122 00:34:28.222172  286707 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kindnet-239758"
	I1122 00:34:28.222199  286707 config.go:182] Loaded profile config "kindnet-239758": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:34:28.222148  286707 addons.go:239] Setting addon storage-provisioner=true in "kindnet-239758"
	I1122 00:34:28.222325  286707 host.go:66] Checking if "kindnet-239758" exists ...
	I1122 00:34:28.222579  286707 cli_runner.go:164] Run: docker container inspect kindnet-239758 --format={{.State.Status}}
	I1122 00:34:28.222766  286707 cli_runner.go:164] Run: docker container inspect kindnet-239758 --format={{.State.Status}}
	I1122 00:34:28.248688  286707 out.go:179] * Verifying Kubernetes components...
	I1122 00:34:28.249873  286707 addons.go:239] Setting addon default-storageclass=true in "kindnet-239758"
	I1122 00:34:28.249913  286707 host.go:66] Checking if "kindnet-239758" exists ...
	I1122 00:34:28.250261  286707 cli_runner.go:164] Run: docker container inspect kindnet-239758 --format={{.State.Status}}
	I1122 00:34:28.268247  286707 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1122 00:34:24.808457  284750 addons.go:530] duration metric: took 573.689855ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1122 00:34:25.068304  284750 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-239758" context rescaled to 1 replicas
	W1122 00:34:26.567768  284750 node_ready.go:57] node "auto-239758" has "Ready":"False" status (will retry)
	I1122 00:34:24.229421  293341 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1122 00:34:24.229657  293341 start.go:159] libmachine.API.Create for "calico-239758" (driver="docker")
	I1122 00:34:24.229692  293341 client.go:173] LocalClient.Create starting
	I1122 00:34:24.229762  293341 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem
	I1122 00:34:24.229792  293341 main.go:143] libmachine: Decoding PEM data...
	I1122 00:34:24.229808  293341 main.go:143] libmachine: Parsing certificate...
	I1122 00:34:24.229856  293341 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem
	I1122 00:34:24.229871  293341 main.go:143] libmachine: Decoding PEM data...
	I1122 00:34:24.229882  293341 main.go:143] libmachine: Parsing certificate...
	I1122 00:34:24.230266  293341 cli_runner.go:164] Run: docker network inspect calico-239758 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1122 00:34:24.254291  293341 cli_runner.go:211] docker network inspect calico-239758 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1122 00:34:24.254376  293341 network_create.go:284] running [docker network inspect calico-239758] to gather additional debugging logs...
	I1122 00:34:24.254396  293341 cli_runner.go:164] Run: docker network inspect calico-239758
	W1122 00:34:24.278046  293341 cli_runner.go:211] docker network inspect calico-239758 returned with exit code 1
	I1122 00:34:24.278121  293341 network_create.go:287] error running [docker network inspect calico-239758]: docker network inspect calico-239758: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network calico-239758 not found
	I1122 00:34:24.278144  293341 network_create.go:289] output of [docker network inspect calico-239758]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network calico-239758 not found
	
	** /stderr **
	I1122 00:34:24.278355  293341 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:34:24.310275  293341 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-680fbf0b84de IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8e:e6:2f:e5:eb:9f} reservation:<nil>}
	I1122 00:34:24.311593  293341 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-fb90361b5ee3 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:6a:cd:d3:0b:da:39} reservation:<nil>}
	I1122 00:34:24.312696  293341 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8fa9d9e8b5ac IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:be:02:4b:3f:d0:70} reservation:<nil>}
	I1122 00:34:24.313833  293341 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-8fcd7657b64b IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:d2:ad:c5:eb:8c:57} reservation:<nil>}
	I1122 00:34:24.314623  293341 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-85b8c03d926b IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:de:8e:84:e4:fa:a8} reservation:<nil>}
	I1122 00:34:24.316036  293341 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ed5ce0}
	I1122 00:34:24.316100  293341 network_create.go:124] attempt to create docker network calico-239758 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1122 00:34:24.316201  293341 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-239758 calico-239758
	I1122 00:34:24.390762  293341 network_create.go:108] docker network calico-239758 192.168.94.0/24 created
	I1122 00:34:24.390846  293341 kic.go:121] calculated static IP "192.168.94.2" for the "calico-239758" container
	I1122 00:34:24.390926  293341 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1122 00:34:24.417142  293341 cli_runner.go:164] Run: docker volume create calico-239758 --label name.minikube.sigs.k8s.io=calico-239758 --label created_by.minikube.sigs.k8s.io=true
	I1122 00:34:24.442277  293341 oci.go:103] Successfully created a docker volume calico-239758
	I1122 00:34:24.442404  293341 cli_runner.go:164] Run: docker run --rm --name calico-239758-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-239758 --entrypoint /usr/bin/test -v calico-239758:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -d /var/lib
	I1122 00:34:24.929977  293341 oci.go:107] Successfully prepared a docker volume calico-239758
	I1122 00:34:24.930041  293341 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:34:24.930049  293341 kic.go:194] Starting extracting preloaded images to volume ...
	I1122 00:34:24.930271  293341 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21934-9122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-239758:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -I lz4 -xf /preloaded.tar -C /extractDir
	I1122 00:34:28.268685  286707 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1122 00:34:28.287433  286707 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1122 00:34:28.287531  286707 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-239758
	I1122 00:34:28.288517  286707 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:34:28.307274  286707 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/kindnet-239758/id_rsa Username:docker}
	I1122 00:34:28.310254  286707 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:34:28.310273  286707 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1122 00:34:28.310340  286707 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-239758
	I1122 00:34:28.332628  286707 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/kindnet-239758/id_rsa Username:docker}
	I1122 00:34:28.408908  286707 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1122 00:34:28.427568  286707 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:34:28.599857  286707 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:34:28.599878  286707 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1122 00:34:29.419500  286707 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1122 00:34:29.420857  286707 node_ready.go:35] waiting up to 15m0s for node "kindnet-239758" to be "Ready" ...
	I1122 00:34:29.421226  286707 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1122 00:34:29.422558  286707 addons.go:530] duration metric: took 1.200539506s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1122 00:34:29.924087  286707 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kindnet-239758" context rescaled to 1 replicas
	W1122 00:34:26.922252  280462 pod_ready.go:104] pod "coredns-66bc5c9577-np5nq" is not "Ready", error: <nil>
	I1122 00:34:28.924510  280462 pod_ready.go:94] pod "coredns-66bc5c9577-np5nq" is "Ready"
	I1122 00:34:28.924547  280462 pod_ready.go:86] duration metric: took 33.008132108s for pod "coredns-66bc5c9577-np5nq" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:28.927643  280462 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-046175" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:28.931333  280462 pod_ready.go:94] pod "etcd-default-k8s-diff-port-046175" is "Ready"
	I1122 00:34:28.931356  280462 pod_ready.go:86] duration metric: took 3.689183ms for pod "etcd-default-k8s-diff-port-046175" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:28.933253  280462 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-046175" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:28.936739  280462 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-046175" is "Ready"
	I1122 00:34:28.936759  280462 pod_ready.go:86] duration metric: took 3.488038ms for pod "kube-apiserver-default-k8s-diff-port-046175" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:28.938479  280462 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-046175" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:29.119994  280462 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-046175" is "Ready"
	I1122 00:34:29.120020  280462 pod_ready.go:86] duration metric: took 181.521421ms for pod "kube-controller-manager-default-k8s-diff-port-046175" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:29.319674  280462 pod_ready.go:83] waiting for pod "kube-proxy-jdzcl" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:29.719578  280462 pod_ready.go:94] pod "kube-proxy-jdzcl" is "Ready"
	I1122 00:34:29.719607  280462 pod_ready.go:86] duration metric: took 399.906376ms for pod "kube-proxy-jdzcl" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:29.919871  280462 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-046175" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:30.319861  280462 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-046175" is "Ready"
	I1122 00:34:30.319884  280462 pod_ready.go:86] duration metric: took 399.990303ms for pod "kube-scheduler-default-k8s-diff-port-046175" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:30.319896  280462 pod_ready.go:40] duration metric: took 34.407818682s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:34:30.364640  280462 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1122 00:34:30.366185  280462 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-046175" cluster and "default" namespace by default
	W1122 00:34:29.066769  284750 node_ready.go:57] node "auto-239758" has "Ready":"False" status (will retry)
	W1122 00:34:31.066997  284750 node_ready.go:57] node "auto-239758" has "Ready":"False" status (will retry)
	W1122 00:34:33.067118  284750 node_ready.go:57] node "auto-239758" has "Ready":"False" status (will retry)
	I1122 00:34:29.402011  293341 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21934-9122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-239758:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -I lz4 -xf /preloaded.tar -C /extractDir: (4.471697671s)
	I1122 00:34:29.402050  293341 kic.go:203] duration metric: took 4.471995178s to extract preloaded images to volume ...
	W1122 00:34:29.402153  293341 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1122 00:34:29.402209  293341 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1122 00:34:29.402261  293341 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1122 00:34:29.475735  293341 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-239758 --name calico-239758 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-239758 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-239758 --network calico-239758 --ip 192.168.94.2 --volume calico-239758:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e
	I1122 00:34:29.803136  293341 cli_runner.go:164] Run: docker container inspect calico-239758 --format={{.State.Running}}
	I1122 00:34:29.822231  293341 cli_runner.go:164] Run: docker container inspect calico-239758 --format={{.State.Status}}
	I1122 00:34:29.840074  293341 cli_runner.go:164] Run: docker exec calico-239758 stat /var/lib/dpkg/alternatives/iptables
	I1122 00:34:29.882254  293341 oci.go:144] the created container "calico-239758" has a running status.
	I1122 00:34:29.882291  293341 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21934-9122/.minikube/machines/calico-239758/id_rsa...
	I1122 00:34:30.011074  293341 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21934-9122/.minikube/machines/calico-239758/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1122 00:34:30.034955  293341 cli_runner.go:164] Run: docker container inspect calico-239758 --format={{.State.Status}}
	I1122 00:34:30.057523  293341 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1122 00:34:30.057554  293341 kic_runner.go:114] Args: [docker exec --privileged calico-239758 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1122 00:34:30.103556  293341 cli_runner.go:164] Run: docker container inspect calico-239758 --format={{.State.Status}}
	I1122 00:34:30.128715  293341 machine.go:94] provisionDockerMachine start ...
	I1122 00:34:30.128824  293341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-239758
	I1122 00:34:30.153454  293341 main.go:143] libmachine: Using SSH client type: native
	I1122 00:34:30.153820  293341 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1122 00:34:30.153838  293341 main.go:143] libmachine: About to run SSH command:
	hostname
	I1122 00:34:30.154638  293341 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34172->127.0.0.1:33113: read: connection reset by peer
	I1122 00:34:33.277873  293341 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-239758
	
	I1122 00:34:33.277899  293341 ubuntu.go:182] provisioning hostname "calico-239758"
	I1122 00:34:33.278221  293341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-239758
	I1122 00:34:33.297630  293341 main.go:143] libmachine: Using SSH client type: native
	I1122 00:34:33.297838  293341 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1122 00:34:33.297851  293341 main.go:143] libmachine: About to run SSH command:
	sudo hostname calico-239758 && echo "calico-239758" | sudo tee /etc/hostname
	I1122 00:34:33.425234  293341 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-239758
	
	I1122 00:34:33.425309  293341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-239758
	I1122 00:34:33.443338  293341 main.go:143] libmachine: Using SSH client type: native
	I1122 00:34:33.443570  293341 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1122 00:34:33.443594  293341 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-239758' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-239758/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-239758' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1122 00:34:33.561945  293341 main.go:143] libmachine: SSH cmd err, output: <nil>: 
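
Each of these provisioning steps is a one-shot command over the container's forwarded SSH port (127.0.0.1:33113 here), and the first dial at 00:34:30.154638 hit a connection reset because sshd was still starting. A sketch of that dial-retry-run pattern with golang.org/x/crypto/ssh; the port and key path are taken from this log and stand in for your own:

```go
package main

import (
	"fmt"
	"log"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// runHostname dials the forwarded SSH port and runs `hostname`,
// retrying briefly because sshd may not be up yet (the log shows one
// "connection reset by peer" before the command succeeds).
func runHostname(addr, keyPath string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test container only
		Timeout:         5 * time.Second,
	}
	var client *ssh.Client
	for i := 0; i < 10; i++ { // retry while sshd starts
		if client, err = ssh.Dial("tcp", addr, cfg); err == nil {
			break
		}
		time.Sleep(time.Second)
	}
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.Output("hostname")
	return string(out), err
}

func main() {
	// Placeholder addr/key from this log; substitute your own machine's.
	out, err := runHostname("127.0.0.1:33113", os.ExpandEnv("$HOME/.minikube/machines/calico-239758/id_rsa"))
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(out)
}
```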
	I1122 00:34:33.561974  293341 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21934-9122/.minikube CaCertPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21934-9122/.minikube}
	I1122 00:34:33.561996  293341 ubuntu.go:190] setting up certificates
	I1122 00:34:33.562006  293341 provision.go:84] configureAuth start
	I1122 00:34:33.562074  293341 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-239758
	I1122 00:34:33.579857  293341 provision.go:143] copyHostCerts
	I1122 00:34:33.579915  293341 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9122/.minikube/ca.pem, removing ...
	I1122 00:34:33.579925  293341 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9122/.minikube/ca.pem
	I1122 00:34:33.580005  293341 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21934-9122/.minikube/ca.pem (1078 bytes)
	I1122 00:34:33.580146  293341 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9122/.minikube/cert.pem, removing ...
	I1122 00:34:33.580159  293341 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9122/.minikube/cert.pem
	I1122 00:34:33.580204  293341 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21934-9122/.minikube/cert.pem (1123 bytes)
	I1122 00:34:33.580309  293341 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9122/.minikube/key.pem, removing ...
	I1122 00:34:33.580319  293341 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9122/.minikube/key.pem
	I1122 00:34:33.580355  293341 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21934-9122/.minikube/key.pem (1675 bytes)
	I1122 00:34:33.580443  293341 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21934-9122/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca-key.pem org=jenkins.calico-239758 san=[127.0.0.1 192.168.94.2 calico-239758 localhost minikube]
	I1122 00:34:33.612809  293341 provision.go:177] copyRemoteCerts
	I1122 00:34:33.612853  293341 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1122 00:34:33.612886  293341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-239758
	I1122 00:34:33.630182  293341 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/calico-239758/id_rsa Username:docker}
	I1122 00:34:33.718644  293341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1122 00:34:33.737188  293341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1122 00:34:33.753390  293341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1122 00:34:33.769831  293341 provision.go:87] duration metric: took 207.815394ms to configureAuth
	I1122 00:34:33.769852  293341 ubuntu.go:206] setting minikube options for container-runtime
	I1122 00:34:33.770010  293341 config.go:182] Loaded profile config "calico-239758": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:34:33.770140  293341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-239758
	I1122 00:34:33.788770  293341 main.go:143] libmachine: Using SSH client type: native
	I1122 00:34:33.788960  293341 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1122 00:34:33.788976  293341 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	W1122 00:34:31.423669  286707 node_ready.go:57] node "kindnet-239758" has "Ready":"False" status (will retry)
	W1122 00:34:33.424513  286707 node_ready.go:57] node "kindnet-239758" has "Ready":"False" status (will retry)
	I1122 00:34:34.049378  293341 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1122 00:34:34.049418  293341 machine.go:97] duration metric: took 3.920680133s to provisionDockerMachine
	I1122 00:34:34.049432  293341 client.go:176] duration metric: took 9.819735436s to LocalClient.Create
	I1122 00:34:34.049458  293341 start.go:167] duration metric: took 9.81980132s to libmachine.API.Create "calico-239758"
	I1122 00:34:34.049469  293341 start.go:293] postStartSetup for "calico-239758" (driver="docker")
	I1122 00:34:34.049487  293341 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1122 00:34:34.049572  293341 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1122 00:34:34.049631  293341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-239758
	I1122 00:34:34.068260  293341 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/calico-239758/id_rsa Username:docker}
	I1122 00:34:34.159774  293341 ssh_runner.go:195] Run: cat /etc/os-release
	I1122 00:34:34.163171  293341 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1122 00:34:34.163195  293341 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1122 00:34:34.163204  293341 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-9122/.minikube/addons for local assets ...
	I1122 00:34:34.163255  293341 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-9122/.minikube/files for local assets ...
	I1122 00:34:34.163340  293341 filesync.go:149] local asset: /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem -> 145852.pem in /etc/ssl/certs
	I1122 00:34:34.163431  293341 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1122 00:34:34.170727  293341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem --> /etc/ssl/certs/145852.pem (1708 bytes)
	I1122 00:34:34.192001  293341 start.go:296] duration metric: took 142.50085ms for postStartSetup
	I1122 00:34:34.192444  293341 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-239758
	I1122 00:34:34.212037  293341 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/calico-239758/config.json ...
	I1122 00:34:34.212352  293341 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:34:34.212415  293341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-239758
	I1122 00:34:34.232097  293341 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/calico-239758/id_rsa Username:docker}
	I1122 00:34:34.318858  293341 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 00:34:34.323290  293341 start.go:128] duration metric: took 10.095316664s to createHost
	I1122 00:34:34.323314  293341 start.go:83] releasing machines lock for "calico-239758", held for 10.095497901s
	I1122 00:34:34.323385  293341 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-239758
	I1122 00:34:34.341009  293341 ssh_runner.go:195] Run: cat /version.json
	I1122 00:34:34.341082  293341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-239758
	I1122 00:34:34.341129  293341 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1122 00:34:34.341224  293341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-239758
	I1122 00:34:34.359899  293341 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/calico-239758/id_rsa Username:docker}
	I1122 00:34:34.360216  293341 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/calico-239758/id_rsa Username:docker}
	I1122 00:34:34.498553  293341 ssh_runner.go:195] Run: systemctl --version
	I1122 00:34:34.504767  293341 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1122 00:34:34.538134  293341 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1122 00:34:34.542736  293341 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1122 00:34:34.542796  293341 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1122 00:34:34.568405  293341 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1122 00:34:34.568428  293341 start.go:496] detecting cgroup driver to use...
	I1122 00:34:34.568459  293341 detect.go:190] detected "systemd" cgroup driver on host os
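
For the "detected systemd cgroup driver" line, one common heuristic (a sketch, not necessarily minikube's exact check in detect.go) is to look for systemd's runtime directory:

```go
package main

import (
	"fmt"
	"os"
)

// detectCgroupDriver prefers the "systemd" driver when the host is
// running systemd as init, falling back to "cgroupfs" otherwise.
// This is an assumed heuristic for illustration.
func detectCgroupDriver() string {
	if st, err := os.Stat("/run/systemd/system"); err == nil && st.IsDir() {
		return "systemd"
	}
	return "cgroupfs"
}

func main() { fmt.Println("cgroup driver:", detectCgroupDriver()) }
```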
	I1122 00:34:34.568511  293341 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1122 00:34:34.583711  293341 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1122 00:34:34.595905  293341 docker.go:218] disabling cri-docker service (if available) ...
	I1122 00:34:34.595949  293341 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1122 00:34:34.611386  293341 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1122 00:34:34.628586  293341 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1122 00:34:34.708629  293341 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1122 00:34:34.791696  293341 docker.go:234] disabling docker service ...
	I1122 00:34:34.791754  293341 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1122 00:34:34.809790  293341 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1122 00:34:34.822252  293341 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1122 00:34:34.903170  293341 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1122 00:34:34.984772  293341 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1122 00:34:34.996404  293341 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1122 00:34:35.010309  293341 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1122 00:34:35.010356  293341 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:34:35.020022  293341 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1122 00:34:35.020090  293341 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:34:35.028457  293341 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:34:35.036764  293341 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:34:35.044839  293341 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1122 00:34:35.052729  293341 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:34:35.060762  293341 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:34:35.073830  293341 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:34:35.081844  293341 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1122 00:34:35.088699  293341 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
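
The block of sed calls above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch cgroup_manager to systemd, and open unprivileged ports. The same line-anchored substitutions in Go, applied to a made-up config snippet:

```go
package main

import (
	"fmt"
	"regexp"
)

// Rewrite the two crio.conf keys the log edits with sed:
// pause_image and cgroup_manager. The sample config is illustrative.
func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "cgroupfs"
`
	// (?m) makes ^/$ match per line, like sed's line-oriented addressing.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "systemd"`)
	fmt.Print(conf)
}
```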
	I1122 00:34:35.095439  293341 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:34:35.174703  293341 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1122 00:34:35.322843  293341 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1122 00:34:35.322913  293341 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1122 00:34:35.327415  293341 start.go:564] Will wait 60s for crictl version
	I1122 00:34:35.327483  293341 ssh_runner.go:195] Run: which crictl
	I1122 00:34:35.331606  293341 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1122 00:34:35.358695  293341 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1122 00:34:35.358768  293341 ssh_runner.go:195] Run: crio --version
	I1122 00:34:35.385499  293341 ssh_runner.go:195] Run: crio --version
	I1122 00:34:35.414828  293341 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1122 00:34:35.415835  293341 cli_runner.go:164] Run: docker network inspect calico-239758 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
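
That Go template turns `docker network inspect` output into a compact JSON document. A sketch decoding it; the payload below is hand-written to mirror the calico-239758 network, with the template's trailing comma in ContainerIPs removed so it parses as strict JSON:

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// networkInfo matches the keys the inspect template above emits.
type networkInfo struct {
	Name         string   `json:"Name"`
	Driver       string   `json:"Driver"`
	Subnet       string   `json:"Subnet"`
	Gateway      string   `json:"Gateway"`
	MTU          int      `json:"MTU"`
	ContainerIPs []string `json:"ContainerIPs"`
}

func main() {
	// Illustrative payload; values mirror the log's calico-239758 network.
	payload := `{"Name":"calico-239758","Driver":"bridge","Subnet":"192.168.94.0/24","Gateway":"192.168.94.1","MTU":1500,"ContainerIPs":["192.168.94.2/24"]}`
	var ni networkInfo
	if err := json.Unmarshal([]byte(payload), &ni); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s: subnet=%s gateway=%s\n", ni.Name, ni.Subnet, ni.Gateway)
}
```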
	I1122 00:34:35.433625  293341 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1122 00:34:35.437798  293341 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:34:35.447876  293341 kubeadm.go:884] updating cluster {Name:calico-239758 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-239758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1122 00:34:35.447990  293341 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:34:35.448035  293341 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:34:35.477983  293341 crio.go:514] all images are preloaded for cri-o runtime.
	I1122 00:34:35.478001  293341 crio.go:433] Images already preloaded, skipping extraction
	I1122 00:34:35.478039  293341 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:34:35.503124  293341 crio.go:514] all images are preloaded for cri-o runtime.
	I1122 00:34:35.503152  293341 cache_images.go:86] Images are preloaded, skipping loading
	I1122 00:34:35.503161  293341 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1122 00:34:35.503280  293341 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=calico-239758 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:calico-239758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
	I1122 00:34:35.503364  293341 ssh_runner.go:195] Run: crio config
	I1122 00:34:35.549133  293341 cni.go:84] Creating CNI manager for "calico"
	I1122 00:34:35.549162  293341 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1122 00:34:35.549182  293341 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-239758 NodeName:calico-239758 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1122 00:34:35.549320  293341 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "calico-239758"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
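
Since the kubeadm config above is rendered from templates, it can be sanity-checked by round-tripping it through a YAML decoder. A sketch that pulls a few KubeletConfiguration fields back out with gopkg.in/yaml.v3; the struct covers only the fields read here, and yaml.v3 ignores the rest:

```go
package main

import (
	"fmt"
	"log"

	"gopkg.in/yaml.v3"
)

// kubeletConfig holds just the fields we want to verify from the
// generated KubeletConfiguration document.
type kubeletConfig struct {
	Kind                     string `yaml:"kind"`
	CgroupDriver             string `yaml:"cgroupDriver"`
	ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
	FailSwapOn               bool   `yaml:"failSwapOn"`
}

func main() {
	doc := `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
failSwapOn: false
`
	var kc kubeletConfig
	if err := yaml.Unmarshal([]byte(doc), &kc); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s: driver=%s endpoint=%s failSwapOn=%v\n",
		kc.Kind, kc.CgroupDriver, kc.ContainerRuntimeEndpoint, kc.FailSwapOn)
}
```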
	
	I1122 00:34:35.549381  293341 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1122 00:34:35.557192  293341 binaries.go:51] Found k8s binaries, skipping transfer
	I1122 00:34:35.557259  293341 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1122 00:34:35.564633  293341 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1122 00:34:35.577920  293341 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1122 00:34:35.594673  293341 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1122 00:34:35.606786  293341 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1122 00:34:35.610092  293341 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
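
The grep -v / echo / cp one-liner above pins a tab-separated host record idempotently: drop any stale line for the name, append the fresh one, then copy the result over /etc/hosts. The same logic in Go, on an in-memory string instead of the real file:

```go
package main

import (
	"fmt"
	"strings"
)

// pinHostRecord drops any existing tab-anchored line for name, then
// appends "ip\tname", mirroring the shell pipeline in the log.
func pinHostRecord(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	sample := "127.0.0.1\tlocalhost\n192.168.94.1\thost.minikube.internal\n"
	fmt.Print(pinHostRecord(sample, "192.168.94.2", "control-plane.minikube.internal"))
}
```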
	I1122 00:34:35.619306  293341 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:34:35.726930  293341 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:34:35.754442  293341 certs.go:69] Setting up /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/calico-239758 for IP: 192.168.94.2
	I1122 00:34:35.754461  293341 certs.go:195] generating shared ca certs ...
	I1122 00:34:35.754477  293341 certs.go:227] acquiring lock for ca certs: {Name:mk04e97b22fe69e444baefaaf0d53a0afe979470 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:34:35.754616  293341 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21934-9122/.minikube/ca.key
	I1122 00:34:35.754681  293341 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21934-9122/.minikube/proxy-client-ca.key
	I1122 00:34:35.754701  293341 certs.go:257] generating profile certs ...
	I1122 00:34:35.754757  293341 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/calico-239758/client.key
	I1122 00:34:35.754770  293341 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/calico-239758/client.crt with IP's: []
	I1122 00:34:35.886588  293341 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/calico-239758/client.crt ...
	I1122 00:34:35.886617  293341 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/calico-239758/client.crt: {Name:mk80e4b50b13640dbfceb4aa8fb1a864e3e757e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:34:35.886834  293341 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/calico-239758/client.key ...
	I1122 00:34:35.886858  293341 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/calico-239758/client.key: {Name:mkdf0ff1ca86a4b6ec7b3c7adc9b549b600dc7d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:34:35.886990  293341 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/calico-239758/apiserver.key.c33b3359
	I1122 00:34:35.887021  293341 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/calico-239758/apiserver.crt.c33b3359 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1122 00:34:35.993993  293341 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/calico-239758/apiserver.crt.c33b3359 ...
	I1122 00:34:35.994013  293341 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/calico-239758/apiserver.crt.c33b3359: {Name:mk0bde71006e042423914c1492c118b912220f5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:34:35.994149  293341 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/calico-239758/apiserver.key.c33b3359 ...
	I1122 00:34:35.994169  293341 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/calico-239758/apiserver.key.c33b3359: {Name:mk05575741a3c7f4a59cea7e3dc3511ae6d16893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:34:35.994241  293341 certs.go:382] copying /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/calico-239758/apiserver.crt.c33b3359 -> /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/calico-239758/apiserver.crt
	I1122 00:34:35.994338  293341 certs.go:386] copying /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/calico-239758/apiserver.key.c33b3359 -> /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/calico-239758/apiserver.key
	I1122 00:34:35.994400  293341 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/calico-239758/proxy-client.key
	I1122 00:34:35.994415  293341 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/calico-239758/proxy-client.crt with IP's: []
	I1122 00:34:36.082678  293341 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/calico-239758/proxy-client.crt ...
	I1122 00:34:36.082700  293341 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/calico-239758/proxy-client.crt: {Name:mk86a3c8d72154b79102df46e6429c52c7f40731 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:34:36.082821  293341 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/calico-239758/proxy-client.key ...
	I1122 00:34:36.082833  293341 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/calico-239758/proxy-client.key: {Name:mka473f85754bb562c81cf79cb8010217c954ef1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
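
The profile certs above are signed against minikubeCA with the SAN list shown in the log ([10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]). A sketch of that flow with crypto/x509; key size, validity, and subject names are illustrative choices rather than minikube's exact parameters, and error handling is elided for brevity:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// In-memory CA standing in for minikubeCA; errors elided for brevity.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Apiserver-style serving cert with the IP SANs from the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.94.2"),
		},
		DNSNames:    []string{"localhost", "minikube"},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
```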
	I1122 00:34:36.083005  293341 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/14585.pem (1338 bytes)
	W1122 00:34:36.083075  293341 certs.go:480] ignoring /home/jenkins/minikube-integration/21934-9122/.minikube/certs/14585_empty.pem, impossibly tiny 0 bytes
	I1122 00:34:36.083089  293341 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca-key.pem (1679 bytes)
	I1122 00:34:36.083124  293341 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/ca.pem (1078 bytes)
	I1122 00:34:36.083148  293341 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/cert.pem (1123 bytes)
	I1122 00:34:36.083171  293341 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/certs/key.pem (1675 bytes)
	I1122 00:34:36.083224  293341 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem (1708 bytes)
	I1122 00:34:36.083788  293341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1122 00:34:36.101590  293341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1122 00:34:36.118269  293341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1122 00:34:36.134848  293341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1122 00:34:36.153038  293341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/calico-239758/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1122 00:34:36.169210  293341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/calico-239758/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1122 00:34:36.185465  293341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/calico-239758/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1122 00:34:36.201634  293341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/calico-239758/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1122 00:34:36.218111  293341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/ssl/certs/145852.pem --> /usr/share/ca-certificates/145852.pem (1708 bytes)
	I1122 00:34:36.235963  293341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1122 00:34:36.252698  293341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9122/.minikube/certs/14585.pem --> /usr/share/ca-certificates/14585.pem (1338 bytes)
	I1122 00:34:36.270282  293341 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1122 00:34:36.282104  293341 ssh_runner.go:195] Run: openssl version
	I1122 00:34:36.287743  293341 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14585.pem && ln -fs /usr/share/ca-certificates/14585.pem /etc/ssl/certs/14585.pem"
	I1122 00:34:36.295537  293341 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14585.pem
	I1122 00:34:36.298840  293341 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 23:52 /usr/share/ca-certificates/14585.pem
	I1122 00:34:36.298889  293341 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14585.pem
	I1122 00:34:36.332698  293341 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14585.pem /etc/ssl/certs/51391683.0"
	I1122 00:34:36.340811  293341 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145852.pem && ln -fs /usr/share/ca-certificates/145852.pem /etc/ssl/certs/145852.pem"
	I1122 00:34:36.348629  293341 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145852.pem
	I1122 00:34:36.351912  293341 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 23:52 /usr/share/ca-certificates/145852.pem
	I1122 00:34:36.351953  293341 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145852.pem
	I1122 00:34:36.387926  293341 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145852.pem /etc/ssl/certs/3ec20f2e.0"
	I1122 00:34:36.395767  293341 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1122 00:34:36.403503  293341 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:34:36.407040  293341 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 23:46 /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:34:36.407093  293341 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:34:36.441785  293341 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1122 00:34:36.449576  293341 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1122 00:34:36.452741  293341 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1122 00:34:36.452803  293341 kubeadm.go:401] StartCluster: {Name:calico-239758 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-239758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:34:36.452900  293341 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1122 00:34:36.452967  293341 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1122 00:34:36.478379  293341 cri.go:89] found id: ""
	I1122 00:34:36.478436  293341 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1122 00:34:36.485826  293341 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1122 00:34:36.493286  293341 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1122 00:34:36.493337  293341 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1122 00:34:36.500466  293341 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1122 00:34:36.500482  293341 kubeadm.go:158] found existing configuration files:
	
	I1122 00:34:36.500511  293341 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1122 00:34:36.507594  293341 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1122 00:34:36.507631  293341 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1122 00:34:36.514397  293341 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1122 00:34:36.521886  293341 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1122 00:34:36.521927  293341 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1122 00:34:36.528989  293341 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1122 00:34:36.536061  293341 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1122 00:34:36.536108  293341 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1122 00:34:36.542926  293341 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1122 00:34:36.550313  293341 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1122 00:34:36.550359  293341 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1122 00:34:36.557152  293341 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1122 00:34:36.595862  293341 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1122 00:34:36.595928  293341 kubeadm.go:319] [preflight] Running pre-flight checks
	I1122 00:34:36.635422  293341 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1122 00:34:36.635525  293341 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1122 00:34:36.635575  293341 kubeadm.go:319] OS: Linux
	I1122 00:34:36.635636  293341 kubeadm.go:319] CGROUPS_CPU: enabled
	I1122 00:34:36.635696  293341 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1122 00:34:36.635755  293341 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1122 00:34:36.635816  293341 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1122 00:34:36.635876  293341 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1122 00:34:36.635939  293341 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1122 00:34:36.635999  293341 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1122 00:34:36.636085  293341 kubeadm.go:319] CGROUPS_IO: enabled
	I1122 00:34:36.700817  293341 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1122 00:34:36.700979  293341 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1122 00:34:36.701167  293341 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1122 00:34:36.707734  293341 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1122 00:34:35.067941  284750 node_ready.go:57] node "auto-239758" has "Ready":"False" status (will retry)
	I1122 00:34:35.566670  284750 node_ready.go:49] node "auto-239758" is "Ready"
	I1122 00:34:35.566700  284750 node_ready.go:38] duration metric: took 11.002603065s for node "auto-239758" to be "Ready" ...
	I1122 00:34:35.566717  284750 api_server.go:52] waiting for apiserver process to appear ...
	I1122 00:34:35.566765  284750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:34:35.579021  284750 api_server.go:72] duration metric: took 11.344274813s to wait for apiserver process to appear ...
	I1122 00:34:35.579047  284750 api_server.go:88] waiting for apiserver healthz status ...
	I1122 00:34:35.579081  284750 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1122 00:34:35.583653  284750 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1122 00:34:35.584652  284750 api_server.go:141] control plane version: v1.34.1
	I1122 00:34:35.584682  284750 api_server.go:131] duration metric: took 5.617194ms to wait for apiserver health ...
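
The healthz probe is a plain HTTPS GET that expects a 200 with body "ok". A sketch of the same check; /healthz is readable by unauthenticated clients in stock kubeadm clusters, and this sketch skips TLS verification where a real client would load the cluster CA from the kubeconfig:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver presents a cluster-local CA; skipping verification
		// keeps the sketch self-contained but is not what production does.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.76.2:8443/healthz") // address from the log
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
}
```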
	I1122 00:34:35.584693  284750 system_pods.go:43] waiting for kube-system pods to appear ...
	I1122 00:34:35.587721  284750 system_pods.go:59] 8 kube-system pods found
	I1122 00:34:35.587755  284750 system_pods.go:61] "coredns-66bc5c9577-hlldw" [b352ddb8-d26f-48ef-852e-60f82fa7a043] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:34:35.587764  284750 system_pods.go:61] "etcd-auto-239758" [b8674b47-653a-427b-b20a-03f58323c835] Running
	I1122 00:34:35.587772  284750 system_pods.go:61] "kindnet-5hwz7" [e454d6a7-3b55-42ff-9c37-87fe9033f46e] Running
	I1122 00:34:35.587778  284750 system_pods.go:61] "kube-apiserver-auto-239758" [f179e2f0-d972-45d4-b549-8a4e482a6ee2] Running
	I1122 00:34:35.587784  284750 system_pods.go:61] "kube-controller-manager-auto-239758" [d31dde27-d5ef-4741-a551-7ef1e1f22d22] Running
	I1122 00:34:35.587794  284750 system_pods.go:61] "kube-proxy-ttj9r" [38725e68-13ff-4bed-b490-1abb73d377e4] Running
	I1122 00:34:35.587801  284750 system_pods.go:61] "kube-scheduler-auto-239758" [832f6857-40a5-43bb-9233-f7d06766bfc5] Running
	I1122 00:34:35.587818  284750 system_pods.go:61] "storage-provisioner" [322c5b60-c9ad-484b-a580-e9f94ac6bd8b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:34:35.587830  284750 system_pods.go:74] duration metric: took 3.128863ms to wait for pod list to return data ...
	I1122 00:34:35.587843  284750 default_sa.go:34] waiting for default service account to be created ...
	I1122 00:34:35.589977  284750 default_sa.go:45] found service account: "default"
	I1122 00:34:35.589998  284750 default_sa.go:55] duration metric: took 2.145397ms for default service account to be created ...
	I1122 00:34:35.590008  284750 system_pods.go:116] waiting for k8s-apps to be running ...
	I1122 00:34:35.592483  284750 system_pods.go:86] 8 kube-system pods found
	I1122 00:34:35.592516  284750 system_pods.go:89] "coredns-66bc5c9577-hlldw" [b352ddb8-d26f-48ef-852e-60f82fa7a043] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:34:35.592525  284750 system_pods.go:89] "etcd-auto-239758" [b8674b47-653a-427b-b20a-03f58323c835] Running
	I1122 00:34:35.592533  284750 system_pods.go:89] "kindnet-5hwz7" [e454d6a7-3b55-42ff-9c37-87fe9033f46e] Running
	I1122 00:34:35.592540  284750 system_pods.go:89] "kube-apiserver-auto-239758" [f179e2f0-d972-45d4-b549-8a4e482a6ee2] Running
	I1122 00:34:35.592547  284750 system_pods.go:89] "kube-controller-manager-auto-239758" [d31dde27-d5ef-4741-a551-7ef1e1f22d22] Running
	I1122 00:34:35.592556  284750 system_pods.go:89] "kube-proxy-ttj9r" [38725e68-13ff-4bed-b490-1abb73d377e4] Running
	I1122 00:34:35.592562  284750 system_pods.go:89] "kube-scheduler-auto-239758" [832f6857-40a5-43bb-9233-f7d06766bfc5] Running
	I1122 00:34:35.592573  284750 system_pods.go:89] "storage-provisioner" [322c5b60-c9ad-484b-a580-e9f94ac6bd8b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:34:35.592605  284750 retry.go:31] will retry after 212.155507ms: missing components: kube-dns
	I1122 00:34:35.808598  284750 system_pods.go:86] 8 kube-system pods found
	I1122 00:34:35.808633  284750 system_pods.go:89] "coredns-66bc5c9577-hlldw" [b352ddb8-d26f-48ef-852e-60f82fa7a043] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:34:35.808642  284750 system_pods.go:89] "etcd-auto-239758" [b8674b47-653a-427b-b20a-03f58323c835] Running
	I1122 00:34:35.808650  284750 system_pods.go:89] "kindnet-5hwz7" [e454d6a7-3b55-42ff-9c37-87fe9033f46e] Running
	I1122 00:34:35.808655  284750 system_pods.go:89] "kube-apiserver-auto-239758" [f179e2f0-d972-45d4-b549-8a4e482a6ee2] Running
	I1122 00:34:35.808660  284750 system_pods.go:89] "kube-controller-manager-auto-239758" [d31dde27-d5ef-4741-a551-7ef1e1f22d22] Running
	I1122 00:34:35.808666  284750 system_pods.go:89] "kube-proxy-ttj9r" [38725e68-13ff-4bed-b490-1abb73d377e4] Running
	I1122 00:34:35.808672  284750 system_pods.go:89] "kube-scheduler-auto-239758" [832f6857-40a5-43bb-9233-f7d06766bfc5] Running
	I1122 00:34:35.808687  284750 system_pods.go:89] "storage-provisioner" [322c5b60-c9ad-484b-a580-e9f94ac6bd8b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:34:35.808722  284750 retry.go:31] will retry after 316.247782ms: missing components: kube-dns
	I1122 00:34:36.128295  284750 system_pods.go:86] 8 kube-system pods found
	I1122 00:34:36.128322  284750 system_pods.go:89] "coredns-66bc5c9577-hlldw" [b352ddb8-d26f-48ef-852e-60f82fa7a043] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:34:36.128328  284750 system_pods.go:89] "etcd-auto-239758" [b8674b47-653a-427b-b20a-03f58323c835] Running
	I1122 00:34:36.128334  284750 system_pods.go:89] "kindnet-5hwz7" [e454d6a7-3b55-42ff-9c37-87fe9033f46e] Running
	I1122 00:34:36.128337  284750 system_pods.go:89] "kube-apiserver-auto-239758" [f179e2f0-d972-45d4-b549-8a4e482a6ee2] Running
	I1122 00:34:36.128342  284750 system_pods.go:89] "kube-controller-manager-auto-239758" [d31dde27-d5ef-4741-a551-7ef1e1f22d22] Running
	I1122 00:34:36.128352  284750 system_pods.go:89] "kube-proxy-ttj9r" [38725e68-13ff-4bed-b490-1abb73d377e4] Running
	I1122 00:34:36.128355  284750 system_pods.go:89] "kube-scheduler-auto-239758" [832f6857-40a5-43bb-9233-f7d06766bfc5] Running
	I1122 00:34:36.128360  284750 system_pods.go:89] "storage-provisioner" [322c5b60-c9ad-484b-a580-e9f94ac6bd8b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:34:36.128381  284750 retry.go:31] will retry after 480.759917ms: missing components: kube-dns
	I1122 00:34:36.613336  284750 system_pods.go:86] 8 kube-system pods found
	I1122 00:34:36.613384  284750 system_pods.go:89] "coredns-66bc5c9577-hlldw" [b352ddb8-d26f-48ef-852e-60f82fa7a043] Running
	I1122 00:34:36.613394  284750 system_pods.go:89] "etcd-auto-239758" [b8674b47-653a-427b-b20a-03f58323c835] Running
	I1122 00:34:36.613399  284750 system_pods.go:89] "kindnet-5hwz7" [e454d6a7-3b55-42ff-9c37-87fe9033f46e] Running
	I1122 00:34:36.613404  284750 system_pods.go:89] "kube-apiserver-auto-239758" [f179e2f0-d972-45d4-b549-8a4e482a6ee2] Running
	I1122 00:34:36.613409  284750 system_pods.go:89] "kube-controller-manager-auto-239758" [d31dde27-d5ef-4741-a551-7ef1e1f22d22] Running
	I1122 00:34:36.613414  284750 system_pods.go:89] "kube-proxy-ttj9r" [38725e68-13ff-4bed-b490-1abb73d377e4] Running
	I1122 00:34:36.613429  284750 system_pods.go:89] "kube-scheduler-auto-239758" [832f6857-40a5-43bb-9233-f7d06766bfc5] Running
	I1122 00:34:36.613433  284750 system_pods.go:89] "storage-provisioner" [322c5b60-c9ad-484b-a580-e9f94ac6bd8b] Running
	I1122 00:34:36.613445  284750 system_pods.go:126] duration metric: took 1.023428657s to wait for k8s-apps to be running ...
	I1122 00:34:36.613459  284750 system_svc.go:44] waiting for kubelet service to be running ....
	I1122 00:34:36.613511  284750 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:34:36.629356  284750 system_svc.go:56] duration metric: took 15.886879ms WaitForService to wait for kubelet
	I1122 00:34:36.629385  284750 kubeadm.go:587] duration metric: took 12.394643418s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:34:36.629417  284750 node_conditions.go:102] verifying NodePressure condition ...
	I1122 00:34:36.631913  284750 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1122 00:34:36.631945  284750 node_conditions.go:123] node cpu capacity is 8
	I1122 00:34:36.631966  284750 node_conditions.go:105] duration metric: took 2.54234ms to run NodePressure ...
	I1122 00:34:36.631982  284750 start.go:242] waiting for startup goroutines ...
	I1122 00:34:36.631996  284750 start.go:247] waiting for cluster config update ...
	I1122 00:34:36.632045  284750 start.go:256] writing updated cluster config ...
	I1122 00:34:36.632401  284750 ssh_runner.go:195] Run: rm -f paused
	I1122 00:34:36.637690  284750 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:34:36.644921  284750 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-hlldw" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:36.648633  284750 pod_ready.go:94] pod "coredns-66bc5c9577-hlldw" is "Ready"
	I1122 00:34:36.648653  284750 pod_ready.go:86] duration metric: took 3.712061ms for pod "coredns-66bc5c9577-hlldw" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:36.650732  284750 pod_ready.go:83] waiting for pod "etcd-auto-239758" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:36.654884  284750 pod_ready.go:94] pod "etcd-auto-239758" is "Ready"
	I1122 00:34:36.654906  284750 pod_ready.go:86] duration metric: took 4.153267ms for pod "etcd-auto-239758" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:36.656957  284750 pod_ready.go:83] waiting for pod "kube-apiserver-auto-239758" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:36.660945  284750 pod_ready.go:94] pod "kube-apiserver-auto-239758" is "Ready"
	I1122 00:34:36.660966  284750 pod_ready.go:86] duration metric: took 3.989133ms for pod "kube-apiserver-auto-239758" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:36.662739  284750 pod_ready.go:83] waiting for pod "kube-controller-manager-auto-239758" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:37.042137  284750 pod_ready.go:94] pod "kube-controller-manager-auto-239758" is "Ready"
	I1122 00:34:37.042169  284750 pod_ready.go:86] duration metric: took 379.411313ms for pod "kube-controller-manager-auto-239758" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:37.241844  284750 pod_ready.go:83] waiting for pod "kube-proxy-ttj9r" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:37.641584  284750 pod_ready.go:94] pod "kube-proxy-ttj9r" is "Ready"
	I1122 00:34:37.641609  284750 pod_ready.go:86] duration metric: took 399.739742ms for pod "kube-proxy-ttj9r" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:37.841579  284750 pod_ready.go:83] waiting for pod "kube-scheduler-auto-239758" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:38.241976  284750 pod_ready.go:94] pod "kube-scheduler-auto-239758" is "Ready"
	I1122 00:34:38.242003  284750 pod_ready.go:86] duration metric: took 400.400799ms for pod "kube-scheduler-auto-239758" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:38.242016  284750 pod_ready.go:40] duration metric: took 1.604299507s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:34:38.290285  284750 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1122 00:34:38.292160  284750 out.go:179] * Done! kubectl is now configured to use "auto-239758" cluster and "default" namespace by default
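
The retry loop above is minikube's readiness poll: it re-lists the kube-system pods with growing backoff (212ms, 316ms, 480ms) until the one missing component, kube-dns, reports Running. A rough hand-run equivalent, assuming kubectl is already pointed at the auto-239758 context as the final log line states:

    # Sketch: block until the kube-dns (CoreDNS) pods are Ready, mirroring the
    # 4m0s budget the log uses for its extra wait.
    kubectl wait --namespace kube-system \
      --for=condition=Ready pod \
      --selector k8s-app=kube-dns \
      --timeout=240s
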
	I1122 00:34:36.710354  293341 out.go:252]   - Generating certificates and keys ...
	I1122 00:34:36.710444  293341 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1122 00:34:36.710541  293341 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1122 00:34:37.440660  293341 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1122 00:34:37.653686  293341 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1122 00:34:37.775780  293341 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1122 00:34:37.989687  293341 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1122 00:34:38.225366  293341 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1122 00:34:38.225528  293341 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [calico-239758 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1122 00:34:38.380447  293341 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1122 00:34:38.380700  293341 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [calico-239758 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1122 00:34:38.602224  293341 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1122 00:34:38.695190  293341 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	W1122 00:34:35.923399  286707 node_ready.go:57] node "kindnet-239758" has "Ready":"False" status (will retry)
	W1122 00:34:37.924356  286707 node_ready.go:57] node "kindnet-239758" has "Ready":"False" status (will retry)
	W1122 00:34:39.924626  286707 node_ready.go:57] node "kindnet-239758" has "Ready":"False" status (will retry)
	I1122 00:34:39.366011  293341 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1122 00:34:39.366204  293341 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1122 00:34:39.425435  293341 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1122 00:34:40.102322  293341 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1122 00:34:40.443471  293341 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1122 00:34:41.377319  293341 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1122 00:34:41.601945  293341 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1122 00:34:41.603321  293341 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1122 00:34:41.608221  293341 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1122 00:34:40.424419  286707 node_ready.go:49] node "kindnet-239758" is "Ready"
	I1122 00:34:40.424450  286707 node_ready.go:38] duration metric: took 11.003562111s for node "kindnet-239758" to be "Ready" ...
	I1122 00:34:40.424469  286707 api_server.go:52] waiting for apiserver process to appear ...
	I1122 00:34:40.424541  286707 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:34:40.440607  286707 api_server.go:72] duration metric: took 12.218636595s to wait for apiserver process to appear ...
	I1122 00:34:40.440638  286707 api_server.go:88] waiting for apiserver healthz status ...
	I1122 00:34:40.440683  286707 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1122 00:34:40.446081  286707 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1122 00:34:40.447220  286707 api_server.go:141] control plane version: v1.34.1
	I1122 00:34:40.447250  286707 api_server.go:131] duration metric: took 6.583507ms to wait for apiserver health ...
	I1122 00:34:40.447262  286707 system_pods.go:43] waiting for kube-system pods to appear ...
	I1122 00:34:40.450923  286707 system_pods.go:59] 8 kube-system pods found
	I1122 00:34:40.450957  286707 system_pods.go:61] "coredns-66bc5c9577-5n5ck" [3c111b03-1f8c-4a15-b3c2-192bd9f7b2bf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:34:40.450964  286707 system_pods.go:61] "etcd-kindnet-239758" [4c6d84a5-79d9-424f-bee9-83064aad8af6] Running
	I1122 00:34:40.450972  286707 system_pods.go:61] "kindnet-wmb78" [90ceb9ef-eb4e-4cd9-97b5-1c95ad408f96] Running
	I1122 00:34:40.450977  286707 system_pods.go:61] "kube-apiserver-kindnet-239758" [a7638db1-f1c6-4c6f-abb5-87b1e0cd04dd] Running
	I1122 00:34:40.450983  286707 system_pods.go:61] "kube-controller-manager-kindnet-239758" [b9366423-31e9-49bc-970a-3997e4033b32] Running
	I1122 00:34:40.450988  286707 system_pods.go:61] "kube-proxy-5k9bx" [b1cf1a76-308a-40d2-9e28-11aecc61b32b] Running
	I1122 00:34:40.450993  286707 system_pods.go:61] "kube-scheduler-kindnet-239758" [90616e41-452c-433e-96ae-05f4e02e0a50] Running
	I1122 00:34:40.451003  286707 system_pods.go:61] "storage-provisioner" [1b70bb95-d065-418b-b0dd-111442a1c25f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:34:40.451010  286707 system_pods.go:74] duration metric: took 3.742031ms to wait for pod list to return data ...
	I1122 00:34:40.451028  286707 default_sa.go:34] waiting for default service account to be created ...
	I1122 00:34:40.453677  286707 default_sa.go:45] found service account: "default"
	I1122 00:34:40.453699  286707 default_sa.go:55] duration metric: took 2.660289ms for default service account to be created ...
	I1122 00:34:40.453709  286707 system_pods.go:116] waiting for k8s-apps to be running ...
	I1122 00:34:40.457123  286707 system_pods.go:86] 8 kube-system pods found
	I1122 00:34:40.457152  286707 system_pods.go:89] "coredns-66bc5c9577-5n5ck" [3c111b03-1f8c-4a15-b3c2-192bd9f7b2bf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:34:40.457166  286707 system_pods.go:89] "etcd-kindnet-239758" [4c6d84a5-79d9-424f-bee9-83064aad8af6] Running
	I1122 00:34:40.457174  286707 system_pods.go:89] "kindnet-wmb78" [90ceb9ef-eb4e-4cd9-97b5-1c95ad408f96] Running
	I1122 00:34:40.457179  286707 system_pods.go:89] "kube-apiserver-kindnet-239758" [a7638db1-f1c6-4c6f-abb5-87b1e0cd04dd] Running
	I1122 00:34:40.457193  286707 system_pods.go:89] "kube-controller-manager-kindnet-239758" [b9366423-31e9-49bc-970a-3997e4033b32] Running
	I1122 00:34:40.457198  286707 system_pods.go:89] "kube-proxy-5k9bx" [b1cf1a76-308a-40d2-9e28-11aecc61b32b] Running
	I1122 00:34:40.457203  286707 system_pods.go:89] "kube-scheduler-kindnet-239758" [90616e41-452c-433e-96ae-05f4e02e0a50] Running
	I1122 00:34:40.457210  286707 system_pods.go:89] "storage-provisioner" [1b70bb95-d065-418b-b0dd-111442a1c25f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:34:40.457237  286707 retry.go:31] will retry after 213.581768ms: missing components: kube-dns
	I1122 00:34:40.675160  286707 system_pods.go:86] 8 kube-system pods found
	I1122 00:34:40.675191  286707 system_pods.go:89] "coredns-66bc5c9577-5n5ck" [3c111b03-1f8c-4a15-b3c2-192bd9f7b2bf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:34:40.675203  286707 system_pods.go:89] "etcd-kindnet-239758" [4c6d84a5-79d9-424f-bee9-83064aad8af6] Running
	I1122 00:34:40.675210  286707 system_pods.go:89] "kindnet-wmb78" [90ceb9ef-eb4e-4cd9-97b5-1c95ad408f96] Running
	I1122 00:34:40.675213  286707 system_pods.go:89] "kube-apiserver-kindnet-239758" [a7638db1-f1c6-4c6f-abb5-87b1e0cd04dd] Running
	I1122 00:34:40.675216  286707 system_pods.go:89] "kube-controller-manager-kindnet-239758" [b9366423-31e9-49bc-970a-3997e4033b32] Running
	I1122 00:34:40.675219  286707 system_pods.go:89] "kube-proxy-5k9bx" [b1cf1a76-308a-40d2-9e28-11aecc61b32b] Running
	I1122 00:34:40.675222  286707 system_pods.go:89] "kube-scheduler-kindnet-239758" [90616e41-452c-433e-96ae-05f4e02e0a50] Running
	I1122 00:34:40.675227  286707 system_pods.go:89] "storage-provisioner" [1b70bb95-d065-418b-b0dd-111442a1c25f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:34:40.675241  286707 retry.go:31] will retry after 234.94544ms: missing components: kube-dns
	I1122 00:34:40.914103  286707 system_pods.go:86] 8 kube-system pods found
	I1122 00:34:40.914146  286707 system_pods.go:89] "coredns-66bc5c9577-5n5ck" [3c111b03-1f8c-4a15-b3c2-192bd9f7b2bf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:34:40.914154  286707 system_pods.go:89] "etcd-kindnet-239758" [4c6d84a5-79d9-424f-bee9-83064aad8af6] Running
	I1122 00:34:40.914162  286707 system_pods.go:89] "kindnet-wmb78" [90ceb9ef-eb4e-4cd9-97b5-1c95ad408f96] Running
	I1122 00:34:40.914168  286707 system_pods.go:89] "kube-apiserver-kindnet-239758" [a7638db1-f1c6-4c6f-abb5-87b1e0cd04dd] Running
	I1122 00:34:40.914173  286707 system_pods.go:89] "kube-controller-manager-kindnet-239758" [b9366423-31e9-49bc-970a-3997e4033b32] Running
	I1122 00:34:40.914181  286707 system_pods.go:89] "kube-proxy-5k9bx" [b1cf1a76-308a-40d2-9e28-11aecc61b32b] Running
	I1122 00:34:40.914186  286707 system_pods.go:89] "kube-scheduler-kindnet-239758" [90616e41-452c-433e-96ae-05f4e02e0a50] Running
	I1122 00:34:40.914197  286707 system_pods.go:89] "storage-provisioner" [1b70bb95-d065-418b-b0dd-111442a1c25f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:34:40.914215  286707 retry.go:31] will retry after 338.264832ms: missing components: kube-dns
	I1122 00:34:41.256177  286707 system_pods.go:86] 8 kube-system pods found
	I1122 00:34:41.256208  286707 system_pods.go:89] "coredns-66bc5c9577-5n5ck" [3c111b03-1f8c-4a15-b3c2-192bd9f7b2bf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:34:41.256214  286707 system_pods.go:89] "etcd-kindnet-239758" [4c6d84a5-79d9-424f-bee9-83064aad8af6] Running
	I1122 00:34:41.256224  286707 system_pods.go:89] "kindnet-wmb78" [90ceb9ef-eb4e-4cd9-97b5-1c95ad408f96] Running
	I1122 00:34:41.256231  286707 system_pods.go:89] "kube-apiserver-kindnet-239758" [a7638db1-f1c6-4c6f-abb5-87b1e0cd04dd] Running
	I1122 00:34:41.256235  286707 system_pods.go:89] "kube-controller-manager-kindnet-239758" [b9366423-31e9-49bc-970a-3997e4033b32] Running
	I1122 00:34:41.256239  286707 system_pods.go:89] "kube-proxy-5k9bx" [b1cf1a76-308a-40d2-9e28-11aecc61b32b] Running
	I1122 00:34:41.256250  286707 system_pods.go:89] "kube-scheduler-kindnet-239758" [90616e41-452c-433e-96ae-05f4e02e0a50] Running
	I1122 00:34:41.256258  286707 system_pods.go:89] "storage-provisioner" [1b70bb95-d065-418b-b0dd-111442a1c25f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:34:41.256281  286707 retry.go:31] will retry after 464.101326ms: missing components: kube-dns
	I1122 00:34:41.724785  286707 system_pods.go:86] 8 kube-system pods found
	I1122 00:34:41.724810  286707 system_pods.go:89] "coredns-66bc5c9577-5n5ck" [3c111b03-1f8c-4a15-b3c2-192bd9f7b2bf] Running
	I1122 00:34:41.724816  286707 system_pods.go:89] "etcd-kindnet-239758" [4c6d84a5-79d9-424f-bee9-83064aad8af6] Running
	I1122 00:34:41.724820  286707 system_pods.go:89] "kindnet-wmb78" [90ceb9ef-eb4e-4cd9-97b5-1c95ad408f96] Running
	I1122 00:34:41.724823  286707 system_pods.go:89] "kube-apiserver-kindnet-239758" [a7638db1-f1c6-4c6f-abb5-87b1e0cd04dd] Running
	I1122 00:34:41.724833  286707 system_pods.go:89] "kube-controller-manager-kindnet-239758" [b9366423-31e9-49bc-970a-3997e4033b32] Running
	I1122 00:34:41.724837  286707 system_pods.go:89] "kube-proxy-5k9bx" [b1cf1a76-308a-40d2-9e28-11aecc61b32b] Running
	I1122 00:34:41.724843  286707 system_pods.go:89] "kube-scheduler-kindnet-239758" [90616e41-452c-433e-96ae-05f4e02e0a50] Running
	I1122 00:34:41.724846  286707 system_pods.go:89] "storage-provisioner" [1b70bb95-d065-418b-b0dd-111442a1c25f] Running
	I1122 00:34:41.724855  286707 system_pods.go:126] duration metric: took 1.27114076s to wait for k8s-apps to be running ...
	I1122 00:34:41.724866  286707 system_svc.go:44] waiting for kubelet service to be running ....
	I1122 00:34:41.724904  286707 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:34:41.739239  286707 system_svc.go:56] duration metric: took 14.363847ms WaitForService to wait for kubelet
	I1122 00:34:41.739269  286707 kubeadm.go:587] duration metric: took 13.517315904s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:34:41.739292  286707 node_conditions.go:102] verifying NodePressure condition ...
	I1122 00:34:41.742308  286707 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1122 00:34:41.742341  286707 node_conditions.go:123] node cpu capacity is 8
	I1122 00:34:41.742362  286707 node_conditions.go:105] duration metric: took 3.063934ms to run NodePressure ...
	I1122 00:34:41.742378  286707 start.go:242] waiting for startup goroutines ...
	I1122 00:34:41.742387  286707 start.go:247] waiting for cluster config update ...
	I1122 00:34:41.742402  286707 start.go:256] writing updated cluster config ...
	I1122 00:34:41.742717  286707 ssh_runner.go:195] Run: rm -f paused
	I1122 00:34:41.746785  286707 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:34:41.750779  286707 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5n5ck" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:41.755881  286707 pod_ready.go:94] pod "coredns-66bc5c9577-5n5ck" is "Ready"
	I1122 00:34:41.755905  286707 pod_ready.go:86] duration metric: took 5.102628ms for pod "coredns-66bc5c9577-5n5ck" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:41.758038  286707 pod_ready.go:83] waiting for pod "etcd-kindnet-239758" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:41.762310  286707 pod_ready.go:94] pod "etcd-kindnet-239758" is "Ready"
	I1122 00:34:41.762331  286707 pod_ready.go:86] duration metric: took 4.246089ms for pod "etcd-kindnet-239758" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:41.764042  286707 pod_ready.go:83] waiting for pod "kube-apiserver-kindnet-239758" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:41.768498  286707 pod_ready.go:94] pod "kube-apiserver-kindnet-239758" is "Ready"
	I1122 00:34:41.768516  286707 pod_ready.go:86] duration metric: took 4.438824ms for pod "kube-apiserver-kindnet-239758" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:41.771804  286707 pod_ready.go:83] waiting for pod "kube-controller-manager-kindnet-239758" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:42.150993  286707 pod_ready.go:94] pod "kube-controller-manager-kindnet-239758" is "Ready"
	I1122 00:34:42.151028  286707 pod_ready.go:86] duration metric: took 379.198583ms for pod "kube-controller-manager-kindnet-239758" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:42.352340  286707 pod_ready.go:83] waiting for pod "kube-proxy-5k9bx" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:42.751466  286707 pod_ready.go:94] pod "kube-proxy-5k9bx" is "Ready"
	I1122 00:34:42.751497  286707 pod_ready.go:86] duration metric: took 399.128098ms for pod "kube-proxy-5k9bx" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:42.951897  286707 pod_ready.go:83] waiting for pod "kube-scheduler-kindnet-239758" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:43.351409  286707 pod_ready.go:94] pod "kube-scheduler-kindnet-239758" is "Ready"
	I1122 00:34:43.351485  286707 pod_ready.go:86] duration metric: took 399.555519ms for pod "kube-scheduler-kindnet-239758" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:34:43.351520  286707 pod_ready.go:40] duration metric: took 1.604703325s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:34:43.410915  286707 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1122 00:34:43.412496  286707 out.go:179] * Done! kubectl is now configured to use "kindnet-239758" cluster and "default" namespace by default
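
The extra wait that just completed checks one pod per control-plane label. Those same pods can be listed directly; the label set is copied from the log, and --context kindnet-239758 is assumed from the "Done!" line:

    # Sketch: the pods covered by the extra "Ready" wait, selected by the
    # component/k8s-app labels the log enumerates.
    kubectl --context kindnet-239758 -n kube-system get pods \
      -l 'component in (etcd,kube-apiserver,kube-controller-manager,kube-scheduler)'
    kubectl --context kindnet-239758 -n kube-system get pods -l k8s-app=kube-dns
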
	I1122 00:34:41.609750  293341 out.go:252]   - Booting up control plane ...
	I1122 00:34:41.609968  293341 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1122 00:34:41.610096  293341 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1122 00:34:41.610670  293341 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1122 00:34:41.626130  293341 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1122 00:34:41.626257  293341 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1122 00:34:41.632958  293341 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1122 00:34:41.633179  293341 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1122 00:34:41.633223  293341 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1122 00:34:41.731919  293341 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1122 00:34:41.732105  293341 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1122 00:34:43.233251  293341 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501329873s
	I1122 00:34:43.235879  293341 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1122 00:34:43.235997  293341 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1122 00:34:43.236113  293341 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1122 00:34:43.236211  293341 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
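
kubeadm's control-plane-check is plain HTTPS polling of the three endpoints printed above. They can be probed by hand from the node; -k is needed because the serving certificates are cluster-internal:

    # Sketch: the same three probes kubeadm runs (addresses copied from the log).
    curl -k https://192.168.94.2:8443/livez
    curl -k https://127.0.0.1:10257/healthz
    curl -k https://127.0.0.1:10259/livez
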
	
	
	==> CRI-O <==
	Nov 22 00:34:10 default-k8s-diff-port-046175 crio[570]: time="2025-11-22T00:34:10.192476757Z" level=info msg="Started container" PID=1750 containerID=140851972c704d2b8933d765c5a0b469002ddd01eaccfb2bd77463c8bb21a8ca description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ljm4t/dashboard-metrics-scraper id=474e8dd0-1bf4-4608-a140-7e5efbd8a3b4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ab47096307824e8a49d25f1f5a8eb219fb566d254ada11368edfffe29e2ffe0a
	Nov 22 00:34:11 default-k8s-diff-port-046175 crio[570]: time="2025-11-22T00:34:11.146647175Z" level=info msg="Removing container: ed8d6832442230c35937e0b726233a7376c70287f3e344300e8cc0d4439a4e8b" id=31adfd02-58c2-4bf0-bea4-c79bc08c4fc6 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 22 00:34:11 default-k8s-diff-port-046175 crio[570]: time="2025-11-22T00:34:11.156601505Z" level=info msg="Removed container ed8d6832442230c35937e0b726233a7376c70287f3e344300e8cc0d4439a4e8b: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ljm4t/dashboard-metrics-scraper" id=31adfd02-58c2-4bf0-bea4-c79bc08c4fc6 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 22 00:34:26 default-k8s-diff-port-046175 crio[570]: time="2025-11-22T00:34:26.188715445Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=eaa3b52f-901a-41ef-83b5-8d55e4d97f58 name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:34:26 default-k8s-diff-port-046175 crio[570]: time="2025-11-22T00:34:26.189721885Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=62277b35-c742-450e-ab16-d19ee7449571 name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:34:26 default-k8s-diff-port-046175 crio[570]: time="2025-11-22T00:34:26.190837876Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=edd7088c-6ffd-4e60-9f40-9f97b51c82b8 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:34:26 default-k8s-diff-port-046175 crio[570]: time="2025-11-22T00:34:26.19096898Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:34:26 default-k8s-diff-port-046175 crio[570]: time="2025-11-22T00:34:26.195676026Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:34:26 default-k8s-diff-port-046175 crio[570]: time="2025-11-22T00:34:26.195866367Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/9181f580e500ae9bb731d0e316ba45a88c9ec0518242cd824456bbf78b6e8fce/merged/etc/passwd: no such file or directory"
	Nov 22 00:34:26 default-k8s-diff-port-046175 crio[570]: time="2025-11-22T00:34:26.195905406Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/9181f580e500ae9bb731d0e316ba45a88c9ec0518242cd824456bbf78b6e8fce/merged/etc/group: no such file or directory"
	Nov 22 00:34:26 default-k8s-diff-port-046175 crio[570]: time="2025-11-22T00:34:26.196214818Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:34:26 default-k8s-diff-port-046175 crio[570]: time="2025-11-22T00:34:26.236244835Z" level=info msg="Created container 08df4c4e3b8f76c73937401a17c2bdb997ebb970dfdcbe313d7d1e8b59c95ad5: kube-system/storage-provisioner/storage-provisioner" id=edd7088c-6ffd-4e60-9f40-9f97b51c82b8 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:34:26 default-k8s-diff-port-046175 crio[570]: time="2025-11-22T00:34:26.236855049Z" level=info msg="Starting container: 08df4c4e3b8f76c73937401a17c2bdb997ebb970dfdcbe313d7d1e8b59c95ad5" id=e7ac1253-908b-4bdd-a295-856e54a5033d name=/runtime.v1.RuntimeService/StartContainer
	Nov 22 00:34:26 default-k8s-diff-port-046175 crio[570]: time="2025-11-22T00:34:26.23887148Z" level=info msg="Started container" PID=1764 containerID=08df4c4e3b8f76c73937401a17c2bdb997ebb970dfdcbe313d7d1e8b59c95ad5 description=kube-system/storage-provisioner/storage-provisioner id=e7ac1253-908b-4bdd-a295-856e54a5033d name=/runtime.v1.RuntimeService/StartContainer sandboxID=60cd7259cfcbecd267e86ad47a79f4ac693579e1fe824abcf8f3dfce50edca9f
	Nov 22 00:34:33 default-k8s-diff-port-046175 crio[570]: time="2025-11-22T00:34:33.059976401Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=eed000c3-0321-4a7a-aa04-400a4bd61bc2 name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:34:33 default-k8s-diff-port-046175 crio[570]: time="2025-11-22T00:34:33.063044121Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=1ed6fb4c-1db6-4e66-8db6-5b84bf9b6d82 name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:34:33 default-k8s-diff-port-046175 crio[570]: time="2025-11-22T00:34:33.064043247Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ljm4t/dashboard-metrics-scraper" id=395d8622-7112-454a-9247-5b1a73c33e43 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:34:33 default-k8s-diff-port-046175 crio[570]: time="2025-11-22T00:34:33.064202829Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:34:33 default-k8s-diff-port-046175 crio[570]: time="2025-11-22T00:34:33.070727586Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:34:33 default-k8s-diff-port-046175 crio[570]: time="2025-11-22T00:34:33.071293727Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:34:33 default-k8s-diff-port-046175 crio[570]: time="2025-11-22T00:34:33.103330012Z" level=info msg="Created container 945b05c05cd0090b92ce4fb575bdcf88d9a6d27815280f0820a63bfd406e62a6: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ljm4t/dashboard-metrics-scraper" id=395d8622-7112-454a-9247-5b1a73c33e43 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:34:33 default-k8s-diff-port-046175 crio[570]: time="2025-11-22T00:34:33.103810963Z" level=info msg="Starting container: 945b05c05cd0090b92ce4fb575bdcf88d9a6d27815280f0820a63bfd406e62a6" id=8a35759d-c490-4a85-8cc1-98e788540d19 name=/runtime.v1.RuntimeService/StartContainer
	Nov 22 00:34:33 default-k8s-diff-port-046175 crio[570]: time="2025-11-22T00:34:33.105636975Z" level=info msg="Started container" PID=1799 containerID=945b05c05cd0090b92ce4fb575bdcf88d9a6d27815280f0820a63bfd406e62a6 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ljm4t/dashboard-metrics-scraper id=8a35759d-c490-4a85-8cc1-98e788540d19 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ab47096307824e8a49d25f1f5a8eb219fb566d254ada11368edfffe29e2ffe0a
	Nov 22 00:34:33 default-k8s-diff-port-046175 crio[570]: time="2025-11-22T00:34:33.209962183Z" level=info msg="Removing container: 140851972c704d2b8933d765c5a0b469002ddd01eaccfb2bd77463c8bb21a8ca" id=2f22353e-73dd-47de-ac1c-6e695f5dfbbe name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 22 00:34:33 default-k8s-diff-port-046175 crio[570]: time="2025-11-22T00:34:33.218859032Z" level=info msg="Removed container 140851972c704d2b8933d765c5a0b469002ddd01eaccfb2bd77463c8bb21a8ca: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ljm4t/dashboard-metrics-scraper" id=2f22353e-73dd-47de-ac1c-6e695f5dfbbe name=/runtime.v1.RuntimeService/RemoveContainer
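
This CRI-O excerpt shows dashboard-metrics-scraper cycling: a container starts at 00:34:10, exits, is replaced at 00:34:33, and the previous attempt is removed, matching the Exited/ATTEMPT 2 row in the status table below. The same RuntimeService state can be inspected from the node with crictl; the profile name and container ID are taken from this log, the rest is an illustrative sketch:

    # Sketch: look at the restarting container from inside the node.
    minikube -p default-k8s-diff-port-046175 ssh -- sudo crictl ps -a
    minikube -p default-k8s-diff-port-046175 ssh -- sudo crictl logs 945b05c05cd00
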
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	945b05c05cd00       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           13 seconds ago      Exited              dashboard-metrics-scraper   2                   ab47096307824       dashboard-metrics-scraper-6ffb444bf9-ljm4t             kubernetes-dashboard
	08df4c4e3b8f7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           20 seconds ago      Running             storage-provisioner         1                   60cd7259cfcbe       storage-provisioner                                    kube-system
	e17e5c680ad7a       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   40 seconds ago      Running             kubernetes-dashboard        0                   e5cd0086b3132       kubernetes-dashboard-855c9754f9-jqktd                  kubernetes-dashboard
	4b12872c6fa61       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           51 seconds ago      Running             coredns                     0                   395a10ea1877b       coredns-66bc5c9577-np5nq                               kube-system
	033fbd2f51fa0       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           51 seconds ago      Running             busybox                     1                   1daed0982a61d       busybox                                                default
	396b80189fa57       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           51 seconds ago      Exited              storage-provisioner         0                   60cd7259cfcbe       storage-provisioner                                    kube-system
	2293f34669dda       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           51 seconds ago      Running             kindnet-cni                 0                   57ca72b7429be       kindnet-nqk28                                          kube-system
	5b02c4f39f6de       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           51 seconds ago      Running             kube-proxy                  0                   be9c9ca5ae8e8       kube-proxy-jdzcl                                       kube-system
	1371a5a17f4e2       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           54 seconds ago      Running             kube-scheduler              0                   57a696312afe7       kube-scheduler-default-k8s-diff-port-046175            kube-system
	baff64b8980c8       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           54 seconds ago      Running             kube-apiserver              0                   fed6a5cb5b9ac       kube-apiserver-default-k8s-diff-port-046175            kube-system
	cb4effdd05eb9       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           54 seconds ago      Running             etcd                        0                   2d7363f9b64f0       etcd-default-k8s-diff-port-046175                      kube-system
	c9323b87a3cb9       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           54 seconds ago      Running             kube-controller-manager     0                   49a6381d6f23c       kube-controller-manager-default-k8s-diff-port-046175   kube-system
	
	
	==> coredns [4b12872c6fa61b798322e32c38f1859a68931ec051534300af7de32a14ecbb1e] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46370 - 45686 "HINFO IN 1010609819208698072.3699696803867637871. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.108359996s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
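
The CoreDNS log is a normal cold start: the ready plugin holds until the kubernetes plugin has synced, the server briefly comes up with an unsynced API, and the early list calls against the service IP 10.96.0.1:443 time out until the pod network settles. Readiness can be probed directly on the ready plugin's default port, 8181; the pod name here is taken from the container-status table above, and the port-forward approach is a sketch:

    # Sketch: hit the "ready" plugin endpoint without needing a shell inside
    # the (distroless) CoreDNS image.
    kubectl -n kube-system port-forward pod/coredns-66bc5c9577-np5nq 8181:8181 &
    curl http://127.0.0.1:8181/ready
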
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-046175
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-046175
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785
	                    minikube.k8s.io/name=default-k8s-diff-port-046175
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_22T00_33_00_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 22 Nov 2025 00:32:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-046175
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 22 Nov 2025 00:34:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 22 Nov 2025 00:34:25 +0000   Sat, 22 Nov 2025 00:32:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 22 Nov 2025 00:34:25 +0000   Sat, 22 Nov 2025 00:32:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 22 Nov 2025 00:34:25 +0000   Sat, 22 Nov 2025 00:32:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 22 Nov 2025 00:34:25 +0000   Sat, 22 Nov 2025 00:33:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-046175
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 5665009e93b91d39dc05718b691e3875
	  System UUID:                cef49250-3102-457d-90bd-87a6df160389
	  Boot ID:                    8e6acb0e-95c3-406d-a4d5-d86e16610b0f
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 coredns-66bc5c9577-np5nq                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     102s
	  kube-system                 etcd-default-k8s-diff-port-046175                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         109s
	  kube-system                 kindnet-nqk28                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      103s
	  kube-system                 kube-apiserver-default-k8s-diff-port-046175             250m (3%)     0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-046175    200m (2%)     0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-proxy-jdzcl                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-scheduler-default-k8s-diff-port-046175             100m (1%)     0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-ljm4t              0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-jqktd                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 102s                 kube-proxy       
	  Normal  Starting                 51s                  kube-proxy       
	  Normal  Starting                 113s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  113s (x8 over 113s)  kubelet          Node default-k8s-diff-port-046175 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    113s (x8 over 113s)  kubelet          Node default-k8s-diff-port-046175 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     113s (x8 over 113s)  kubelet          Node default-k8s-diff-port-046175 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    108s                 kubelet          Node default-k8s-diff-port-046175 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  108s                 kubelet          Node default-k8s-diff-port-046175 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     108s                 kubelet          Node default-k8s-diff-port-046175 status is now: NodeHasSufficientPID
	  Normal  Starting                 108s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           104s                 node-controller  Node default-k8s-diff-port-046175 event: Registered Node default-k8s-diff-port-046175 in Controller
	  Normal  NodeReady                91s                  kubelet          Node default-k8s-diff-port-046175 status is now: NodeReady
	  Normal  Starting                 54s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  54s (x8 over 54s)    kubelet          Node default-k8s-diff-port-046175 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s (x8 over 54s)    kubelet          Node default-k8s-diff-port-046175 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s (x8 over 54s)    kubelet          Node default-k8s-diff-port-046175 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           49s                  node-controller  Node default-k8s-diff-port-046175 event: Registered Node default-k8s-diff-port-046175 in Controller
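
The node report itself is healthy: all pressure conditions are False, Ready is True, and the Events table records three kubelet start cycles (the initial boot at ~113s, a second start at ~108s, and the restart 54s ago from this test's stop/start). Just the Ready condition can be pulled without the full describe dump; the node name is from the log, the jsonpath query is an illustrative sketch:

    # Sketch: extract only the Ready condition for the node named above.
    kubectl get node default-k8s-diff-port-046175 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
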
	
	
	==> dmesg <==
	[  +0.082960] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023673] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.276450] kauditd_printk_skb: 47 callbacks suppressed
	[Nov21 23:48] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.062274] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.023896] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.023887] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.023879] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +1.023873] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +2.047773] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[  +4.031577] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[Nov21 23:49] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[ +16.383304] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
	[ +32.252695] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: ae 8a 9e 6e 40 97 8e b9 d4 bf cc 6b 08 00
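
The martian-source lines mean the kernel saw packets on eth0 whose source address (127.0.0.1) is not valid for that interface; with nested container networks this is typically hairpinned pod traffic and noise rather than a failure. Whether it is logged at all is controlled by a sysctl:

    # Sketch: check martian logging on the node (setting it to 0 would silence
    # these dmesg lines; shown read-only here).
    sysctl net.ipv4.conf.all.log_martians
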
	
	
	==> etcd [cb4effdd05eb9c31be1bd5e532b9906269e3438992f9777dc396eb3006f69f34] <==
	{"level":"warn","ts":"2025-11-22T00:33:59.332331Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-22T00:33:59.014838Z","time spent":"317.457542ms","remote":"127.0.0.1:55038","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4725,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/deployments/kubernetes-dashboard/dashboard-metrics-scraper\" mod_revision:525 > success:<request_put:<key:\"/registry/deployments/kubernetes-dashboard/dashboard-metrics-scraper\" value_size:4649 >> failure:<request_range:<key:\"/registry/deployments/kubernetes-dashboard/dashboard-metrics-scraper\" > >"}
	{"level":"info","ts":"2025-11-22T00:33:59.533909Z","caller":"traceutil/trace.go:172","msg":"trace[1664692026] linearizableReadLoop","detail":"{readStateIndex:576; appliedIndex:576; }","duration":"144.231686ms","start":"2025-11-22T00:33:59.389654Z","end":"2025-11-22T00:33:59.533886Z","steps":["trace[1664692026] 'read index received'  (duration: 144.223897ms)","trace[1664692026] 'applied index is now lower than readState.Index'  (duration: 6.5µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-22T00:33:59.648936Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"259.252845ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ljm4t\" limit:1 ","response":"range_response_count:1 size:2792"}
	{"level":"info","ts":"2025-11-22T00:33:59.649306Z","caller":"traceutil/trace.go:172","msg":"trace[1633187650] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ljm4t; range_end:; response_count:1; response_revision:545; }","duration":"259.63624ms","start":"2025-11-22T00:33:59.389649Z","end":"2025-11-22T00:33:59.649286Z","steps":["trace[1633187650] 'agreement among raft nodes before linearized reading'  (duration: 144.335651ms)","trace[1633187650] 'range keys from in-memory index tree'  (duration: 114.86853ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-22T00:33:59.649445Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"115.445067ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597221124013874 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/deployments/kubernetes-dashboard/kubernetes-dashboard\" mod_revision:542 > success:<request_put:<key:\"/registry/deployments/kubernetes-dashboard/kubernetes-dashboard\" value_size:4847 >> failure:<request_range:<key:\"/registry/deployments/kubernetes-dashboard/kubernetes-dashboard\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-22T00:33:59.650067Z","caller":"traceutil/trace.go:172","msg":"trace[394589799] linearizableReadLoop","detail":"{readStateIndex:577; appliedIndex:576; }","duration":"116.073918ms","start":"2025-11-22T00:33:59.533962Z","end":"2025-11-22T00:33:59.650036Z","steps":["trace[394589799] 'read index received'  (duration: 29.208µs)","trace[394589799] 'applied index is now lower than readState.Index'  (duration: 116.043068ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-22T00:33:59.650274Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"184.942899ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kubernetes-dashboard/kubernetes-dashboard\" limit:1 ","response":"range_response_count:1 size:957"}
	{"level":"info","ts":"2025-11-22T00:33:59.650308Z","caller":"traceutil/trace.go:172","msg":"trace[1320213160] range","detail":"{range_begin:/registry/serviceaccounts/kubernetes-dashboard/kubernetes-dashboard; range_end:; response_count:1; response_revision:546; }","duration":"184.984412ms","start":"2025-11-22T00:33:59.465315Z","end":"2025-11-22T00:33:59.650299Z","steps":["trace[1320213160] 'agreement among raft nodes before linearized reading'  (duration: 184.823539ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-22T00:33:59.650365Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"231.814036ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-np5nq\" limit:1 ","response":"range_response_count:1 size:5944"}
	{"level":"info","ts":"2025-11-22T00:33:59.650403Z","caller":"traceutil/trace.go:172","msg":"trace[818869516] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-np5nq; range_end:; response_count:1; response_revision:546; }","duration":"231.863623ms","start":"2025-11-22T00:33:59.418531Z","end":"2025-11-22T00:33:59.650395Z","steps":["trace[818869516] 'agreement among raft nodes before linearized reading'  (duration: 231.733953ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-22T00:33:59.650453Z","caller":"traceutil/trace.go:172","msg":"trace[14681837] transaction","detail":"{read_only:false; response_revision:546; number_of_response:1; }","duration":"282.408395ms","start":"2025-11-22T00:33:59.368026Z","end":"2025-11-22T00:33:59.650435Z","steps":["trace[14681837] 'process raft request'  (duration: 165.909065ms)","trace[14681837] 'compare'  (duration: 115.36055ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-22T00:33:59.650615Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"185.326015ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kubernetes-dashboard/kubernetes-dashboard\" limit:1 ","response":"range_response_count:1 size:957"}
	{"level":"info","ts":"2025-11-22T00:33:59.650652Z","caller":"traceutil/trace.go:172","msg":"trace[501672378] range","detail":"{range_begin:/registry/serviceaccounts/kubernetes-dashboard/kubernetes-dashboard; range_end:; response_count:1; response_revision:546; }","duration":"185.365839ms","start":"2025-11-22T00:33:59.465277Z","end":"2025-11-22T00:33:59.650643Z","steps":["trace[501672378] 'agreement among raft nodes before linearized reading'  (duration: 185.254681ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-22T00:34:05.133908Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"130.996988ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597221124013982 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-3ev7gzm6szwznfxpz4rb57chxa\" mod_revision:479 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-3ev7gzm6szwznfxpz4rb57chxa\" value_size:620 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-3ev7gzm6szwznfxpz4rb57chxa\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-22T00:34:05.134090Z","caller":"traceutil/trace.go:172","msg":"trace[51142223] transaction","detail":"{read_only:false; response_revision:605; number_of_response:1; }","duration":"141.905815ms","start":"2025-11-22T00:34:04.992165Z","end":"2025-11-22T00:34:05.134071Z","steps":["trace[51142223] 'process raft request'  (duration: 10.674159ms)","trace[51142223] 'compare'  (duration: 130.889113ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-22T00:34:06.063397Z","caller":"traceutil/trace.go:172","msg":"trace[1060462428] transaction","detail":"{read_only:false; response_revision:608; number_of_response:1; }","duration":"106.148564ms","start":"2025-11-22T00:34:05.957234Z","end":"2025-11-22T00:34:06.063383Z","steps":["trace[1060462428] 'process raft request'  (duration: 106.034297ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-22T00:34:27.300767Z","caller":"traceutil/trace.go:172","msg":"trace[157611854] transaction","detail":"{read_only:false; response_revision:654; number_of_response:1; }","duration":"101.112516ms","start":"2025-11-22T00:34:27.199638Z","end":"2025-11-22T00:34:27.300751Z","steps":["trace[157611854] 'process raft request'  (duration: 100.934094ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-22T00:34:28.527832Z","caller":"traceutil/trace.go:172","msg":"trace[1348268258] linearizableReadLoop","detail":"{readStateIndex:691; appliedIndex:691; }","duration":"109.274775ms","start":"2025-11-22T00:34:28.418532Z","end":"2025-11-22T00:34:28.527807Z","steps":["trace[1348268258] 'read index received'  (duration: 109.262365ms)","trace[1348268258] 'applied index is now lower than readState.Index'  (duration: 11.244µs)"],"step_count":2}
	{"level":"info","ts":"2025-11-22T00:34:28.527975Z","caller":"traceutil/trace.go:172","msg":"trace[1277067968] transaction","detail":"{read_only:false; response_revision:655; number_of_response:1; }","duration":"132.105256ms","start":"2025-11-22T00:34:28.395855Z","end":"2025-11-22T00:34:28.527960Z","steps":["trace[1277067968] 'process raft request'  (duration: 131.959753ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-22T00:34:28.528081Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"109.501471ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-np5nq\" limit:1 ","response":"range_response_count:1 size:5944"}
	{"level":"info","ts":"2025-11-22T00:34:28.528150Z","caller":"traceutil/trace.go:172","msg":"trace[447318893] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-np5nq; range_end:; response_count:1; response_revision:654; }","duration":"109.61571ms","start":"2025-11-22T00:34:28.418521Z","end":"2025-11-22T00:34:28.528137Z","steps":["trace[447318893] 'agreement among raft nodes before linearized reading'  (duration: 109.361827ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-22T00:34:28.685179Z","caller":"traceutil/trace.go:172","msg":"trace[738216789] transaction","detail":"{read_only:false; response_revision:657; number_of_response:1; }","duration":"152.239389ms","start":"2025-11-22T00:34:28.532925Z","end":"2025-11-22T00:34:28.685165Z","steps":["trace[738216789] 'process raft request'  (duration: 152.199943ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-22T00:34:28.685227Z","caller":"traceutil/trace.go:172","msg":"trace[1750452361] transaction","detail":"{read_only:false; response_revision:656; number_of_response:1; }","duration":"152.281617ms","start":"2025-11-22T00:34:28.532923Z","end":"2025-11-22T00:34:28.685205Z","steps":["trace[1750452361] 'process raft request'  (duration: 105.100712ms)","trace[1750452361] 'compare'  (duration: 46.962336ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-22T00:34:28.692414Z","caller":"traceutil/trace.go:172","msg":"trace[1901711117] transaction","detail":"{read_only:false; response_revision:658; number_of_response:1; }","duration":"157.799685ms","start":"2025-11-22T00:34:28.534598Z","end":"2025-11-22T00:34:28.692398Z","steps":["trace[1901711117] 'process raft request'  (duration: 157.668707ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-22T00:34:28.883279Z","caller":"traceutil/trace.go:172","msg":"trace[1125045978] transaction","detail":"{read_only:false; response_revision:659; number_of_response:1; }","duration":"181.523907ms","start":"2025-11-22T00:34:28.701733Z","end":"2025-11-22T00:34:28.883256Z","steps":["trace[1125045978] 'process raft request'  (duration: 137.757615ms)","trace[1125045978] 'compare'  (duration: 43.642908ms)"],"step_count":2}
	
	
	==> kernel <==
	 00:34:46 up  1:17,  0 user,  load average: 4.11, 3.35, 2.12
	Linux default-k8s-diff-port-046175 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [2293f34669ddac65d895a603acb24bfc4d87bf04bf17a68f75a667e0f0386e29] <==
	I1122 00:33:55.592728       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1122 00:33:55.686139       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1122 00:33:55.686439       1 main.go:148] setting mtu 1500 for CNI 
	I1122 00:33:55.686465       1 main.go:178] kindnetd IP family: "ipv4"
	I1122 00:33:55.686481       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-22T00:33:55Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1122 00:33:55.987779       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1122 00:33:55.987814       1 controller.go:381] "Waiting for informer caches to sync"
	I1122 00:33:55.987829       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1122 00:33:55.987979       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1122 00:33:56.288356       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1122 00:33:56.288395       1 metrics.go:72] Registering metrics
	I1122 00:33:56.288524       1 controller.go:711] "Syncing nftables rules"
	I1122 00:34:05.897185       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1122 00:34:05.897275       1 main.go:301] handling current node
	I1122 00:34:15.897247       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1122 00:34:15.897291       1 main.go:301] handling current node
	I1122 00:34:25.897446       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1122 00:34:25.897499       1 main.go:301] handling current node
	I1122 00:34:35.897669       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1122 00:34:35.897699       1 main.go:301] handling current node
	I1122 00:34:45.900155       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1122 00:34:45.900203       1 main.go:301] handling current node
	
	
	==> kube-apiserver [baff64b8980c8ff7dd1c7ba87a50d4ea1b4d0bc4551fdda3b346aed0dd0806fc] <==
	I1122 00:33:54.649026       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1122 00:33:54.647672       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1122 00:33:54.647684       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1122 00:33:54.647724       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1122 00:33:54.650797       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1122 00:33:54.657572       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1122 00:33:54.668377       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1122 00:33:54.674265       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1122 00:33:54.679536       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1122 00:33:54.679567       1 policy_source.go:240] refreshing policies
	I1122 00:33:54.722520       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1122 00:33:54.722585       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1122 00:33:54.735814       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1122 00:33:55.139323       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1122 00:33:55.146789       1 controller.go:667] quota admission added evaluator for: namespaces
	I1122 00:33:55.188684       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1122 00:33:55.213481       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1122 00:33:55.220918       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1122 00:33:55.273433       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.63.65"}
	I1122 00:33:55.287717       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.224.254"}
	I1122 00:33:55.558771       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1122 00:33:58.393225       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1122 00:33:58.522402       1 controller.go:667] quota admission added evaluator for: endpoints
	I1122 00:33:58.523155       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [c9323b87a3cb9e9f47608ebbfc01d685fde2c082c4217ffeafce458f5e9b9ead] <==
	I1122 00:33:57.966791       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1122 00:33:57.990107       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1122 00:33:57.990117       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1122 00:33:57.990226       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1122 00:33:57.990530       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1122 00:33:57.990559       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1122 00:33:57.991445       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1122 00:33:57.991540       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1122 00:33:57.991638       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-046175"
	I1122 00:33:57.991682       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1122 00:33:57.994693       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1122 00:33:57.995853       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1122 00:33:57.995947       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1122 00:33:57.996009       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1122 00:33:57.996018       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1122 00:33:57.996025       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1122 00:33:57.996962       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1122 00:33:57.999229       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1122 00:33:58.001488       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1122 00:33:58.002641       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1122 00:33:58.003726       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1122 00:33:58.004907       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1122 00:33:58.007144       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1122 00:33:58.011330       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1122 00:33:58.015626       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [5b02c4f39f6deaa78cd85e6b355b467c645ddb1564142788a9c2995c61b6f880] <==
	I1122 00:33:55.492792       1 server_linux.go:53] "Using iptables proxy"
	I1122 00:33:55.569464       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1122 00:33:55.670323       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1122 00:33:55.670442       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1122 00:33:55.670597       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1122 00:33:55.697883       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1122 00:33:55.697953       1 server_linux.go:132] "Using iptables Proxier"
	I1122 00:33:55.703766       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1122 00:33:55.704237       1 server.go:527] "Version info" version="v1.34.1"
	I1122 00:33:55.704268       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:33:55.708138       1 config.go:106] "Starting endpoint slice config controller"
	I1122 00:33:55.709810       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1122 00:33:55.708334       1 config.go:200] "Starting service config controller"
	I1122 00:33:55.709846       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1122 00:33:55.708361       1 config.go:403] "Starting serviceCIDR config controller"
	I1122 00:33:55.709859       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1122 00:33:55.710953       1 config.go:309] "Starting node config controller"
	I1122 00:33:55.715597       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1122 00:33:55.715950       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1122 00:33:55.809969       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1122 00:33:55.810263       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1122 00:33:55.810284       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [1371a5a17f4e24662cf2becd362174f92c814b7d7c998f6684dc3377977af331] <==
	I1122 00:33:53.395812       1 serving.go:386] Generated self-signed cert in-memory
	I1122 00:33:54.676279       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1122 00:33:54.677340       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:33:54.694229       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1122 00:33:54.694389       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1122 00:33:54.694417       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1122 00:33:54.694466       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1122 00:33:54.698047       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:33:54.698106       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:33:54.698129       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1122 00:33:54.698137       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1122 00:33:54.794910       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1122 00:33:54.799149       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1122 00:33:54.799219       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 22 00:33:58 default-k8s-diff-port-046175 kubelet[736]: I1122 00:33:58.386176     736 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 22 00:33:59 default-k8s-diff-port-046175 kubelet[736]: I1122 00:33:59.362570     736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/c7691f68-4748-403c-b999-decb49f55769-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-jqktd\" (UID: \"c7691f68-4748-403c-b999-decb49f55769\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-jqktd"
	Nov 22 00:33:59 default-k8s-diff-port-046175 kubelet[736]: I1122 00:33:59.362639     736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhq49\" (UniqueName: \"kubernetes.io/projected/c7691f68-4748-403c-b999-decb49f55769-kube-api-access-lhq49\") pod \"kubernetes-dashboard-855c9754f9-jqktd\" (UID: \"c7691f68-4748-403c-b999-decb49f55769\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-jqktd"
	Nov 22 00:33:59 default-k8s-diff-port-046175 kubelet[736]: I1122 00:33:59.362667     736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gpsp\" (UniqueName: \"kubernetes.io/projected/81212010-4d9f-4af8-8e8b-c43717e014b7-kube-api-access-4gpsp\") pod \"dashboard-metrics-scraper-6ffb444bf9-ljm4t\" (UID: \"81212010-4d9f-4af8-8e8b-c43717e014b7\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ljm4t"
	Nov 22 00:33:59 default-k8s-diff-port-046175 kubelet[736]: I1122 00:33:59.362743     736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/81212010-4d9f-4af8-8e8b-c43717e014b7-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-ljm4t\" (UID: \"81212010-4d9f-4af8-8e8b-c43717e014b7\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ljm4t"
	Nov 22 00:34:09 default-k8s-diff-port-046175 kubelet[736]: I1122 00:34:09.795036     736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-jqktd" podStartSLOduration=5.820636155 podStartE2EDuration="11.795013675s" podCreationTimestamp="2025-11-22 00:33:58 +0000 UTC" firstStartedPulling="2025-11-22 00:33:59.980251611 +0000 UTC m=+8.016771000" lastFinishedPulling="2025-11-22 00:34:05.954629139 +0000 UTC m=+13.991148520" observedRunningTime="2025-11-22 00:34:07.154610729 +0000 UTC m=+15.191130118" watchObservedRunningTime="2025-11-22 00:34:09.795013675 +0000 UTC m=+17.831533067"
	Nov 22 00:34:10 default-k8s-diff-port-046175 kubelet[736]: I1122 00:34:10.140886     736 scope.go:117] "RemoveContainer" containerID="ed8d6832442230c35937e0b726233a7376c70287f3e344300e8cc0d4439a4e8b"
	Nov 22 00:34:11 default-k8s-diff-port-046175 kubelet[736]: I1122 00:34:11.145245     736 scope.go:117] "RemoveContainer" containerID="ed8d6832442230c35937e0b726233a7376c70287f3e344300e8cc0d4439a4e8b"
	Nov 22 00:34:11 default-k8s-diff-port-046175 kubelet[736]: I1122 00:34:11.145614     736 scope.go:117] "RemoveContainer" containerID="140851972c704d2b8933d765c5a0b469002ddd01eaccfb2bd77463c8bb21a8ca"
	Nov 22 00:34:11 default-k8s-diff-port-046175 kubelet[736]: E1122 00:34:11.145804     736 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ljm4t_kubernetes-dashboard(81212010-4d9f-4af8-8e8b-c43717e014b7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ljm4t" podUID="81212010-4d9f-4af8-8e8b-c43717e014b7"
	Nov 22 00:34:12 default-k8s-diff-port-046175 kubelet[736]: I1122 00:34:12.148214     736 scope.go:117] "RemoveContainer" containerID="140851972c704d2b8933d765c5a0b469002ddd01eaccfb2bd77463c8bb21a8ca"
	Nov 22 00:34:12 default-k8s-diff-port-046175 kubelet[736]: E1122 00:34:12.148392     736 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ljm4t_kubernetes-dashboard(81212010-4d9f-4af8-8e8b-c43717e014b7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ljm4t" podUID="81212010-4d9f-4af8-8e8b-c43717e014b7"
	Nov 22 00:34:20 default-k8s-diff-port-046175 kubelet[736]: I1122 00:34:20.294359     736 scope.go:117] "RemoveContainer" containerID="140851972c704d2b8933d765c5a0b469002ddd01eaccfb2bd77463c8bb21a8ca"
	Nov 22 00:34:20 default-k8s-diff-port-046175 kubelet[736]: E1122 00:34:20.294584     736 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ljm4t_kubernetes-dashboard(81212010-4d9f-4af8-8e8b-c43717e014b7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ljm4t" podUID="81212010-4d9f-4af8-8e8b-c43717e014b7"
	Nov 22 00:34:26 default-k8s-diff-port-046175 kubelet[736]: I1122 00:34:26.188299     736 scope.go:117] "RemoveContainer" containerID="396b80189fa578113fc7607b685f5f11b40553d27cb2a04db7b505536915321a"
	Nov 22 00:34:33 default-k8s-diff-port-046175 kubelet[736]: I1122 00:34:33.059475     736 scope.go:117] "RemoveContainer" containerID="140851972c704d2b8933d765c5a0b469002ddd01eaccfb2bd77463c8bb21a8ca"
	Nov 22 00:34:33 default-k8s-diff-port-046175 kubelet[736]: I1122 00:34:33.208782     736 scope.go:117] "RemoveContainer" containerID="140851972c704d2b8933d765c5a0b469002ddd01eaccfb2bd77463c8bb21a8ca"
	Nov 22 00:34:33 default-k8s-diff-port-046175 kubelet[736]: I1122 00:34:33.208962     736 scope.go:117] "RemoveContainer" containerID="945b05c05cd0090b92ce4fb575bdcf88d9a6d27815280f0820a63bfd406e62a6"
	Nov 22 00:34:33 default-k8s-diff-port-046175 kubelet[736]: E1122 00:34:33.209182     736 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ljm4t_kubernetes-dashboard(81212010-4d9f-4af8-8e8b-c43717e014b7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ljm4t" podUID="81212010-4d9f-4af8-8e8b-c43717e014b7"
	Nov 22 00:34:40 default-k8s-diff-port-046175 kubelet[736]: I1122 00:34:40.294472     736 scope.go:117] "RemoveContainer" containerID="945b05c05cd0090b92ce4fb575bdcf88d9a6d27815280f0820a63bfd406e62a6"
	Nov 22 00:34:40 default-k8s-diff-port-046175 kubelet[736]: E1122 00:34:40.294701     736 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ljm4t_kubernetes-dashboard(81212010-4d9f-4af8-8e8b-c43717e014b7)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ljm4t" podUID="81212010-4d9f-4af8-8e8b-c43717e014b7"
	Nov 22 00:34:42 default-k8s-diff-port-046175 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 22 00:34:42 default-k8s-diff-port-046175 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 22 00:34:42 default-k8s-diff-port-046175 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 22 00:34:42 default-k8s-diff-port-046175 systemd[1]: kubelet.service: Consumed 1.592s CPU time.
	
	
	==> kubernetes-dashboard [e17e5c680ad7a142a3deec04ab3951d68eb8d7e36343494542d0ae2b4b532db6] <==
	2025/11/22 00:34:06 Starting overwatch
	2025/11/22 00:34:06 Using namespace: kubernetes-dashboard
	2025/11/22 00:34:06 Using in-cluster config to connect to apiserver
	2025/11/22 00:34:06 Using secret token for csrf signing
	2025/11/22 00:34:06 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/22 00:34:06 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/22 00:34:06 Successful initial request to the apiserver, version: v1.34.1
	2025/11/22 00:34:06 Generating JWE encryption key
	2025/11/22 00:34:06 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/22 00:34:06 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/22 00:34:06 Initializing JWE encryption key from synchronized object
	2025/11/22 00:34:06 Creating in-cluster Sidecar client
	2025/11/22 00:34:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/22 00:34:06 Serving insecurely on HTTP port: 9090
	2025/11/22 00:34:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [08df4c4e3b8f76c73937401a17c2bdb997ebb970dfdcbe313d7d1e8b59c95ad5] <==
	I1122 00:34:26.252326       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1122 00:34:26.261377       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1122 00:34:26.261425       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1122 00:34:26.263440       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:34:29.719346       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:34:33.979394       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:34:37.578015       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:34:40.631132       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:34:43.653668       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:34:43.660828       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1122 00:34:43.660992       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1122 00:34:43.661171       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f83cc8c2-c96e-4e26-a62a-1dc1f2279333", APIVersion:"v1", ResourceVersion:"670", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-046175_b805cbeb-2265-4a62-abc1-5c79cc010bb1 became leader
	I1122 00:34:43.661226       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-046175_b805cbeb-2265-4a62-abc1-5c79cc010bb1!
	W1122 00:34:43.667228       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:34:43.670647       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1122 00:34:43.761929       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-046175_b805cbeb-2265-4a62-abc1-5c79cc010bb1!
	W1122 00:34:45.673393       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:34:45.677240       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [396b80189fa578113fc7607b685f5f11b40553d27cb2a04db7b505536915321a] <==
	I1122 00:33:55.452300       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1122 00:34:25.456114       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-046175 -n default-k8s-diff-port-046175
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-046175 -n default-k8s-diff-port-046175: exit status 2 (338.968332ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-046175 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (5.61s)
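
The etcd excerpt in the post-mortem above is dominated by "apply request took too long" warnings, which typically point at slow disk I/O or CPU contention on the CI host (note the load average of 4.11 in the kernel section) rather than a product bug. As a quick triage aid, not part of the test suite, a short Go program can pull the slow-apply durations out of those JSON log lines; the struct fields below mirror the level/ts/msg/took keys visible in the dump, and reading from stdin is an assumption for illustration.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// traceLine mirrors the JSON fields emitted by etcd in the dump above.
type traceLine struct {
	Level string `json:"level"`
	TS    string `json:"ts"`
	Msg   string `json:"msg"`
	Took  string `json:"took"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20) // etcd trace lines can be very long
	for sc.Scan() {
		var t traceLine
		if json.Unmarshal(sc.Bytes(), &t) != nil {
			continue // skip non-JSON lines (section headers, kubelet output, ...)
		}
		if t.Msg == "apply request took too long" {
			fmt.Printf("%s  took=%s\n", t.TS, t.Took)
		}
	}
}

Piping the "==> etcd <==" section of the minikube logs output through this lists each slow apply with its timestamp, which makes it easy to see whether the slowness clusters around the pause/unpause window under test.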

                                                
                                    

Test pass (264/328)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 4.8
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.1/json-events 3.74
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.07
18 TestDownloadOnly/v1.34.1/DeleteAll 0.21
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.13
20 TestDownloadOnlyKic 0.39
21 TestBinaryMirror 0.8
22 TestOffline 57.59
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 120.69
31 TestAddons/serial/GCPAuth/Namespaces 0.12
32 TestAddons/serial/GCPAuth/FakeCredentials 7.41
48 TestAddons/StoppedEnableDisable 16.59
49 TestCertOptions 26.74
50 TestCertExpiration 215.22
52 TestForceSystemdFlag 25.42
53 TestForceSystemdEnv 39.52
58 TestErrorSpam/setup 22.17
59 TestErrorSpam/start 0.64
60 TestErrorSpam/status 0.89
61 TestErrorSpam/pause 5.94
62 TestErrorSpam/unpause 5.58
63 TestErrorSpam/stop 12.54
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 70.3
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 5.76
70 TestFunctional/serial/KubeContext 0.04
71 TestFunctional/serial/KubectlGetPods 0.1
74 TestFunctional/serial/CacheCmd/cache/add_remote 2.59
75 TestFunctional/serial/CacheCmd/cache/add_local 1.11
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.28
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.47
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.12
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
83 TestFunctional/serial/ExtraConfig 39.8
84 TestFunctional/serial/ComponentHealth 0.06
85 TestFunctional/serial/LogsCmd 1.13
86 TestFunctional/serial/LogsFileCmd 1.15
87 TestFunctional/serial/InvalidService 4.52
89 TestFunctional/parallel/ConfigCmd 0.46
90 TestFunctional/parallel/DashboardCmd 6.98
91 TestFunctional/parallel/DryRun 0.38
92 TestFunctional/parallel/InternationalLanguage 0.16
93 TestFunctional/parallel/StatusCmd 0.9
98 TestFunctional/parallel/AddonsCmd 0.16
99 TestFunctional/parallel/PersistentVolumeClaim 24.96
101 TestFunctional/parallel/SSHCmd 0.57
102 TestFunctional/parallel/CpCmd 1.73
103 TestFunctional/parallel/MySQL 17.01
104 TestFunctional/parallel/FileSync 0.27
105 TestFunctional/parallel/CertSync 1.74
109 TestFunctional/parallel/NodeLabels 0.06
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.61
113 TestFunctional/parallel/License 0.48
114 TestFunctional/parallel/Version/short 0.08
115 TestFunctional/parallel/Version/components 0.44
116 TestFunctional/parallel/ImageCommands/ImageListShort 0.21
117 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.21
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.21
120 TestFunctional/parallel/ImageCommands/ImageBuild 2.04
121 TestFunctional/parallel/ImageCommands/Setup 0.98
122 TestFunctional/parallel/UpdateContextCmd/no_changes 0.14
123 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
124 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.14
128 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.51
130 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
132 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.19
135 TestFunctional/parallel/ImageCommands/ImageRemove 0.46
138 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
139 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
144 TestFunctional/parallel/ProfileCmd/profile_not_create 0.39
145 TestFunctional/parallel/ProfileCmd/profile_list 0.38
146 TestFunctional/parallel/ProfileCmd/profile_json_output 0.38
147 TestFunctional/parallel/MountCmd/any-port 7.79
148 TestFunctional/parallel/MountCmd/specific-port 1.98
149 TestFunctional/parallel/MountCmd/VerifyCleanup 1.66
150 TestFunctional/parallel/ServiceCmd/List 1.68
151 TestFunctional/parallel/ServiceCmd/JSONOutput 1.68
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 161.79
163 TestMultiControlPlane/serial/DeployApp 3.97
164 TestMultiControlPlane/serial/PingHostFromPods 0.99
165 TestMultiControlPlane/serial/AddWorkerNode 56.12
166 TestMultiControlPlane/serial/NodeLabels 0.06
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.82
168 TestMultiControlPlane/serial/CopyFile 16.1
169 TestMultiControlPlane/serial/StopSecondaryNode 14.1
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.66
171 TestMultiControlPlane/serial/RestartSecondaryNode 8.22
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.83
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 107.19
174 TestMultiControlPlane/serial/DeleteSecondaryNode 9.38
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.64
176 TestMultiControlPlane/serial/StopCluster 37.68
177 TestMultiControlPlane/serial/RestartCluster 53.73
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.64
179 TestMultiControlPlane/serial/AddSecondaryNode 66.96
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.82
185 TestJSONOutput/start/Command 66.34
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 6.05
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.22
210 TestKicCustomNetwork/create_custom_network 27.08
211 TestKicCustomNetwork/use_default_bridge_network 21.4
212 TestKicExistingNetwork 27.03
213 TestKicCustomSubnet 22.94
214 TestKicStaticIP 23.43
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 46.92
219 TestMountStart/serial/StartWithMountFirst 4.64
220 TestMountStart/serial/VerifyMountFirst 0.25
221 TestMountStart/serial/StartWithMountSecond 7.51
222 TestMountStart/serial/VerifyMountSecond 0.26
223 TestMountStart/serial/DeleteFirst 1.63
224 TestMountStart/serial/VerifyMountPostDelete 0.25
225 TestMountStart/serial/Stop 1.24
226 TestMountStart/serial/RestartStopped 7.12
227 TestMountStart/serial/VerifyMountPostStop 0.25
230 TestMultiNode/serial/FreshStart2Nodes 64.98
231 TestMultiNode/serial/DeployApp2Nodes 3.36
232 TestMultiNode/serial/PingHostFrom2Pods 0.68
233 TestMultiNode/serial/AddNode 56.01
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.61
236 TestMultiNode/serial/CopyFile 9.17
237 TestMultiNode/serial/StopNode 2.18
238 TestMultiNode/serial/StartAfterStop 6.89
239 TestMultiNode/serial/RestartKeepsNodes 78.37
240 TestMultiNode/serial/DeleteNode 5.12
241 TestMultiNode/serial/StopMultiNode 28.54
242 TestMultiNode/serial/RestartMultiNode 47.03
243 TestMultiNode/serial/ValidateNameConflict 21.98
248 TestPreload 85.16
250 TestScheduledStopUnix 97.51
253 TestInsufficientStorage 12.08
254 TestRunningBinaryUpgrade 68
256 TestKubernetesUpgrade 302.53
257 TestMissingContainerUpgrade 68.27
259 TestPause/serial/Start 53.19
260 TestStoppedBinaryUpgrade/Setup 0.46
261 TestStoppedBinaryUpgrade/Upgrade 99.13
262 TestPause/serial/SecondStartNoReconfiguration 21.89
271 TestStoppedBinaryUpgrade/MinikubeLogs 1.12
273 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
274 TestNoKubernetes/serial/StartWithK8s 26.69
278 TestNoKubernetes/serial/StartWithStopK8s 16.4
283 TestNetworkPlugins/group/false 3.34
287 TestNoKubernetes/serial/Start 6.67
288 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
289 TestNoKubernetes/serial/VerifyK8sNotRunning 0.27
290 TestNoKubernetes/serial/ProfileList 16.44
292 TestStartStop/group/old-k8s-version/serial/FirstStart 46.76
293 TestNoKubernetes/serial/Stop 1.55
294 TestNoKubernetes/serial/StartNoArgs 6.29
295 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.29
297 TestStartStop/group/no-preload/serial/FirstStart 49.57
298 TestStartStop/group/old-k8s-version/serial/DeployApp 7.24
300 TestStartStop/group/old-k8s-version/serial/Stop 15.99
301 TestStartStop/group/no-preload/serial/DeployApp 7.21
303 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
304 TestStartStop/group/old-k8s-version/serial/SecondStart 44.17
305 TestStartStop/group/no-preload/serial/Stop 16.29
307 TestStartStop/group/embed-certs/serial/FirstStart 69.86
308 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
309 TestStartStop/group/no-preload/serial/SecondStart 44.46
310 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
311 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
312 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.22
314 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
316 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 41.6
317 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
318 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.22
320 TestStartStop/group/embed-certs/serial/DeployApp 8.24
322 TestStartStop/group/newest-cni/serial/FirstStart 29.15
324 TestStartStop/group/embed-certs/serial/Stop 16.21
325 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.24
326 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
327 TestStartStop/group/embed-certs/serial/SecondStart 43.69
328 TestStartStop/group/newest-cni/serial/DeployApp 0
330 TestStartStop/group/newest-cni/serial/Stop 8.02
332 TestStartStop/group/default-k8s-diff-port/serial/Stop 16.45
333 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.24
334 TestStartStop/group/newest-cni/serial/SecondStart 10.43
335 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
336 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
337 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
339 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.24
340 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 45.35
341 TestNetworkPlugins/group/auto/Start 44.45
342 TestNetworkPlugins/group/kindnet/Start 43.19
343 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
344 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
345 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.22
347 TestNetworkPlugins/group/calico/Start 52.94
348 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
349 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
350 TestNetworkPlugins/group/auto/KubeletFlags 0.28
351 TestNetworkPlugins/group/auto/NetCatPod 8.18
352 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.23
354 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
355 TestNetworkPlugins/group/auto/DNS 0.13
356 TestNetworkPlugins/group/auto/Localhost 0.1
357 TestNetworkPlugins/group/auto/HairPin 0.09
358 TestNetworkPlugins/group/kindnet/KubeletFlags 0.3
359 TestNetworkPlugins/group/kindnet/NetCatPod 8.19
360 TestNetworkPlugins/group/custom-flannel/Start 58.13
361 TestNetworkPlugins/group/kindnet/DNS 0.16
362 TestNetworkPlugins/group/kindnet/Localhost 0.09
363 TestNetworkPlugins/group/kindnet/HairPin 0.08
364 TestNetworkPlugins/group/enable-default-cni/Start 69.77
365 TestNetworkPlugins/group/calico/ControllerPod 6.01
366 TestNetworkPlugins/group/flannel/Start 51.32
367 TestNetworkPlugins/group/calico/KubeletFlags 0.34
368 TestNetworkPlugins/group/calico/NetCatPod 10.77
369 TestNetworkPlugins/group/calico/DNS 0.12
370 TestNetworkPlugins/group/calico/Localhost 0.1
371 TestNetworkPlugins/group/calico/HairPin 0.08
372 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.3
373 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.23
374 TestNetworkPlugins/group/bridge/Start 42.6
375 TestNetworkPlugins/group/custom-flannel/DNS 0.1
376 TestNetworkPlugins/group/custom-flannel/Localhost 0.09
377 TestNetworkPlugins/group/custom-flannel/HairPin 0.08
378 TestNetworkPlugins/group/flannel/ControllerPod 6
379 TestNetworkPlugins/group/flannel/KubeletFlags 0.31
380 TestNetworkPlugins/group/flannel/NetCatPod 9.18
381 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.4
382 TestNetworkPlugins/group/enable-default-cni/NetCatPod 8.23
383 TestNetworkPlugins/group/flannel/DNS 0.11
384 TestNetworkPlugins/group/flannel/Localhost 0.08
385 TestNetworkPlugins/group/flannel/HairPin 0.08
386 TestNetworkPlugins/group/enable-default-cni/DNS 0.1
387 TestNetworkPlugins/group/enable-default-cni/Localhost 0.08
388 TestNetworkPlugins/group/enable-default-cni/HairPin 0.08
389 TestNetworkPlugins/group/bridge/KubeletFlags 0.31
390 TestNetworkPlugins/group/bridge/NetCatPod 8.21
391 TestNetworkPlugins/group/bridge/DNS 0.11
392 TestNetworkPlugins/group/bridge/Localhost 0.09
393 TestNetworkPlugins/group/bridge/HairPin 0.1
TestDownloadOnly/v1.28.0/json-events (4.8s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-706420 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-706420 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.80143351s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (4.80s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1121 23:46:12.147634   14585 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1121 23:46:12.147738   14585 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21934-9122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
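
For reference, the preload check logged above amounts to a stat of the cached tarball under MINIKUBE_HOME. A minimal standalone sketch of the same check follows; the path components are taken from the preload.go log lines, MINIKUBE_HOME is assumed to point at the .minikube directory used by the run, and this is not minikube's internal API.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Directory layout as shown in the "Found local preload" log line above.
	home := os.Getenv("MINIKUBE_HOME")
	preload := filepath.Join(home, "cache", "preloaded-tarball",
		"preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4")
	if _, err := os.Stat(preload); err != nil {
		fmt.Println("preload missing:", err)
		os.Exit(1)
	}
	fmt.Println("found local preload:", preload)
}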

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-706420
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-706420: exit status 85 (71.249772ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-706420 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-706420 │ jenkins │ v1.37.0 │ 21 Nov 25 23:46 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 23:46:07
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1121 23:46:07.398601   14597 out.go:360] Setting OutFile to fd 1 ...
	I1121 23:46:07.399267   14597 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:46:07.399277   14597 out.go:374] Setting ErrFile to fd 2...
	I1121 23:46:07.399282   14597 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:46:07.399455   14597 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9122/.minikube/bin
	W1121 23:46:07.399561   14597 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21934-9122/.minikube/config/config.json: open /home/jenkins/minikube-integration/21934-9122/.minikube/config/config.json: no such file or directory
	I1121 23:46:07.400005   14597 out.go:368] Setting JSON to true
	I1121 23:46:07.400850   14597 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1716,"bootTime":1763767051,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1121 23:46:07.400914   14597 start.go:143] virtualization: kvm guest
	I1121 23:46:07.405106   14597 out.go:99] [download-only-706420] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1121 23:46:07.405240   14597 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21934-9122/.minikube/cache/preloaded-tarball: no such file or directory
	I1121 23:46:07.405278   14597 notify.go:221] Checking for updates...
	I1121 23:46:07.406394   14597 out.go:171] MINIKUBE_LOCATION=21934
	I1121 23:46:07.407551   14597 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 23:46:07.408601   14597 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21934-9122/kubeconfig
	I1121 23:46:07.409532   14597 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-9122/.minikube
	I1121 23:46:07.410564   14597 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1121 23:46:07.412401   14597 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1121 23:46:07.412594   14597 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 23:46:07.438453   14597 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1121 23:46:07.438524   14597 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 23:46:07.810537   14597 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-11-21 23:46:07.801720497 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 23:46:07.810652   14597 docker.go:319] overlay module found
	I1121 23:46:07.812077   14597 out.go:99] Using the docker driver based on user configuration
	I1121 23:46:07.812106   14597 start.go:309] selected driver: docker
	I1121 23:46:07.812114   14597 start.go:930] validating driver "docker" against <nil>
	I1121 23:46:07.812221   14597 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 23:46:07.867840   14597 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-11-21 23:46:07.858632975 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 23:46:07.868022   14597 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1121 23:46:07.868558   14597 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1121 23:46:07.868722   14597 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1121 23:46:07.870294   14597 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-706420 host does not exist
	  To start a cluster, run: "minikube start -p download-only-706420"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-706420
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.34.1/json-events (3.74s)

=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-756396 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-756396 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (3.738453453s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (3.74s)

TestDownloadOnly/v1.34.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1121 23:46:16.302594   14585 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1121 23:46:16.302628   14585 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21934-9122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)
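The preload check above is purely local: it stats the cached tarball under this job's MINIKUBE_HOME and downloads nothing. A minimal manual equivalent, sketched against the paths the log itself reports, would be:

	ls /home/jenkins/minikube-integration/21934-9122/.minikube/cache/preloaded-tarball/
	# expected to list preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4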

TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-756396
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-756396: exit status 85 (68.941997ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-706420 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-706420 │ jenkins │ v1.37.0 │ 21 Nov 25 23:46 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 21 Nov 25 23:46 UTC │ 21 Nov 25 23:46 UTC │
	│ delete  │ -p download-only-706420                                                                                                                                                   │ download-only-706420 │ jenkins │ v1.37.0 │ 21 Nov 25 23:46 UTC │ 21 Nov 25 23:46 UTC │
	│ start   │ -o=json --download-only -p download-only-756396 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-756396 │ jenkins │ v1.37.0 │ 21 Nov 25 23:46 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 23:46:12
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1121 23:46:12.613806   14959 out.go:360] Setting OutFile to fd 1 ...
	I1121 23:46:12.613999   14959 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:46:12.614007   14959 out.go:374] Setting ErrFile to fd 2...
	I1121 23:46:12.614011   14959 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:46:12.614181   14959 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9122/.minikube/bin
	I1121 23:46:12.614589   14959 out.go:368] Setting JSON to true
	I1121 23:46:12.615305   14959 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1722,"bootTime":1763767051,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1121 23:46:12.615358   14959 start.go:143] virtualization: kvm guest
	I1121 23:46:12.617096   14959 out.go:99] [download-only-756396] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1121 23:46:12.617204   14959 notify.go:221] Checking for updates...
	I1121 23:46:12.618242   14959 out.go:171] MINIKUBE_LOCATION=21934
	I1121 23:46:12.619447   14959 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 23:46:12.620557   14959 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21934-9122/kubeconfig
	I1121 23:46:12.621538   14959 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-9122/.minikube
	I1121 23:46:12.622555   14959 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1121 23:46:12.624386   14959 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1121 23:46:12.624589   14959 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 23:46:12.646432   14959 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1121 23:46:12.646490   14959 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 23:46:12.699661   14959 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:52 SystemTime:2025-11-21 23:46:12.690801631 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 23:46:12.699750   14959 docker.go:319] overlay module found
	I1121 23:46:12.701237   14959 out.go:99] Using the docker driver based on user configuration
	I1121 23:46:12.701268   14959 start.go:309] selected driver: docker
	I1121 23:46:12.701276   14959 start.go:930] validating driver "docker" against <nil>
	I1121 23:46:12.701347   14959 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 23:46:12.754508   14959 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:52 SystemTime:2025-11-21 23:46:12.745952764 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 23:46:12.754684   14959 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1121 23:46:12.755155   14959 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1121 23:46:12.755304   14959 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1121 23:46:12.756846   14959 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-756396 host does not exist
	  To start a cluster, run: "minikube start -p download-only-756396"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-756396
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnlyKic (0.39s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-688978 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-688978" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-688978
--- PASS: TestDownloadOnlyKic (0.39s)

TestBinaryMirror (0.8s)

=== RUN   TestBinaryMirror
I1121 23:46:17.375161   14585 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-604163 --alsologtostderr --binary-mirror http://127.0.0.1:33157 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-604163" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-604163
--- PASS: TestBinaryMirror (0.80s)
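For context, the binary-mirror flow exercised above fetches the Kubernetes binaries from a local HTTP endpoint instead of dl.k8s.io. A rough reproduction sketch; the web server choice and the requirement that it mirror dl.k8s.io's path layout are assumptions, only the minikube command comes from the log:

	python3 -m http.server 33157 &    # hypothetical local mirror on the port the test used
	out/minikube-linux-amd64 start --download-only -p binary-mirror-604163 --alsologtostderr --binary-mirror http://127.0.0.1:33157 --driver=docker --container-runtime=crio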

TestOffline (57.59s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-033967 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-033967 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (53.397474115s)
helpers_test.go:175: Cleaning up "offline-crio-033967" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-033967
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-033967: (4.193427389s)
--- PASS: TestOffline (57.59s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-386094
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-386094: exit status 85 (62.622559ms)

-- stdout --
	* Profile "addons-386094" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-386094"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-386094
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-386094: exit status 85 (63.32334ms)

-- stdout --
	* Profile "addons-386094" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-386094"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (120.69s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-386094 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-386094 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m0.687988175s)
--- PASS: TestAddons/Setup (120.69s)
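All addons here are enabled in one shot via repeated --addons flags at start time; the same state can also be reached incrementally on the running cluster. A minimal sketch, with the addon name chosen for illustration from the list above:

	out/minikube-linux-amd64 -p addons-386094 addons enable metrics-server
	out/minikube-linux-amd64 -p addons-386094 addons list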

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-386094 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-386094 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/serial/GCPAuth/FakeCredentials (7.41s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-386094 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-386094 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [36e8d411-9a35-4c79-b1f5-8e3e5c5fc9c1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [36e8d411-9a35-4c79-b1f5-8e3e5c5fc9c1] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 7.003084163s
addons_test.go:694: (dbg) Run:  kubectl --context addons-386094 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-386094 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-386094 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (7.41s)

TestAddons/StoppedEnableDisable (16.59s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-386094
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-386094: (16.315930939s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-386094
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-386094
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-386094
--- PASS: TestAddons/StoppedEnableDisable (16.59s)

TestCertOptions (26.74s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-524062 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-524062 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (23.825639196s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-524062 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-524062 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-524062 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-524062" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-524062
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-524062: (2.292308172s)
--- PASS: TestCertOptions (26.74s)
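The assertion in this test boils down to reading the SANs out of the generated apiserver certificate, using the openssl invocation shown above. It can be run by hand; the grep filter here is an addition for readability, not part of the test:

	out/minikube-linux-amd64 -p cert-options-524062 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 'Subject Alternative Name'
	# should include 192.168.15.15 and www.google.com from the --apiserver-ips/--apiserver-names flags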

TestCertExpiration (215.22s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-624739 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
E1122 00:28:19.474257   14585 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-624739 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (27.751993568s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-624739 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-624739 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (5.000727958s)
helpers_test.go:175: Cleaning up "cert-expiration-624739" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-624739
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-624739: (2.464456129s)
--- PASS: TestCertExpiration (215.22s)
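The flow above starts the cluster with three-minute certificates, waits out the expiry window (hence the ~215s wall time), then restarts with --cert-expiration=8760h to force regeneration. One way to observe the resulting expiry date by hand, sketched under the assumption that the profile is still up:

	out/minikube-linux-amd64 -p cert-expiration-624739 ssh "openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"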

TestForceSystemdFlag (25.42s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-791875 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-791875 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (22.601601606s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-791875 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-791875" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-791875
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-791875: (2.509749268s)
--- PASS: TestForceSystemdFlag (25.42s)
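The test's final check is the cat of the crio drop-in config shown above; with --force-systemd the cgroup manager written there should be systemd. A quick manual filter, a sketch assuming crio's cgroup_manager key is the setting of interest:

	out/minikube-linux-amd64 -p force-systemd-flag-791875 ssh "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager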

TestForceSystemdEnv (39.52s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-087837 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-087837 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (35.917862228s)
helpers_test.go:175: Cleaning up "force-systemd-env-087837" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-087837
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-087837: (3.606402455s)
--- PASS: TestForceSystemdEnv (39.52s)

TestErrorSpam/setup (22.17s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-745118 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-745118 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-745118 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-745118 --driver=docker  --container-runtime=crio: (22.166734459s)
--- PASS: TestErrorSpam/setup (22.17s)

TestErrorSpam/start (0.64s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-745118 --log_dir /tmp/nospam-745118 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-745118 --log_dir /tmp/nospam-745118 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-745118 --log_dir /tmp/nospam-745118 start --dry-run
--- PASS: TestErrorSpam/start (0.64s)

TestErrorSpam/status (0.89s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-745118 --log_dir /tmp/nospam-745118 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-745118 --log_dir /tmp/nospam-745118 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-745118 --log_dir /tmp/nospam-745118 status
--- PASS: TestErrorSpam/status (0.89s)

TestErrorSpam/pause (5.94s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-745118 --log_dir /tmp/nospam-745118 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-745118 --log_dir /tmp/nospam-745118 pause: exit status 80 (1.755308557s)

-- stdout --
	* Pausing node nospam-745118 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T23:51:54Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-745118 --log_dir /tmp/nospam-745118 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-745118 --log_dir /tmp/nospam-745118 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-745118 --log_dir /tmp/nospam-745118 pause: exit status 80 (2.026158988s)

-- stdout --
	* Pausing node nospam-745118 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T23:51:56Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-745118 --log_dir /tmp/nospam-745118 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-745118 --log_dir /tmp/nospam-745118 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-745118 --log_dir /tmp/nospam-745118 pause: exit status 80 (2.160672769s)

-- stdout --
	* Pausing node nospam-745118 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T23:51:58Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-745118 --log_dir /tmp/nospam-745118 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (5.94s)
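All three pause attempts fail identically: minikube shells into the node and runs sudo runc list -f json, which exits 1 because /run/runc does not exist on this crio node. The underlying check can be reproduced directly, a sketch assuming the profile is still running:

	out/minikube-linux-amd64 -p nospam-745118 ssh "sudo runc list -f json"
	out/minikube-linux-amd64 -p nospam-745118 ssh "ls -ld /run/runc"    # absent here, hence the GUEST_PAUSE error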

TestErrorSpam/unpause (5.58s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-745118 --log_dir /tmp/nospam-745118 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-745118 --log_dir /tmp/nospam-745118 unpause: exit status 80 (2.099460289s)

-- stdout --
	* Unpausing node nospam-745118 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T23:52:00Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-745118 --log_dir /tmp/nospam-745118 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-745118 --log_dir /tmp/nospam-745118 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-745118 --log_dir /tmp/nospam-745118 unpause: exit status 80 (1.806884535s)

-- stdout --
	* Unpausing node nospam-745118 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T23:52:02Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-745118 --log_dir /tmp/nospam-745118 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-745118 --log_dir /tmp/nospam-745118 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-745118 --log_dir /tmp/nospam-745118 unpause: exit status 80 (1.674455932s)

-- stdout --
	* Unpausing node nospam-745118 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T23:52:04Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-745118 --log_dir /tmp/nospam-745118 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.58s)

TestErrorSpam/stop (12.54s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-745118 --log_dir /tmp/nospam-745118 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-745118 --log_dir /tmp/nospam-745118 stop: (12.343663368s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-745118 --log_dir /tmp/nospam-745118 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-745118 --log_dir /tmp/nospam-745118 stop
--- PASS: TestErrorSpam/stop (12.54s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21934-9122/.minikube/files/etc/test/nested/copy/14585/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (70.3s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-159819 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1121 23:53:19.482075   14585 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 23:53:19.488457   14585 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 23:53:19.499777   14585 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 23:53:19.521106   14585 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 23:53:19.562400   14585 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 23:53:19.643734   14585 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 23:53:19.805183   14585 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 23:53:20.126806   14585 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 23:53:20.768248   14585 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 23:53:22.049811   14585 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 23:53:24.611161   14585 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 23:53:29.732804   14585 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-159819 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m10.296559299s)
--- PASS: TestFunctional/serial/StartWithProxy (70.30s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (5.76s)

=== RUN   TestFunctional/serial/SoftStart
I1121 23:53:31.779567   14585 config.go:182] Loaded profile config "functional-159819": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-159819 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-159819 --alsologtostderr -v=8: (5.763147989s)
functional_test.go:678: soft start took 5.76389617s for "functional-159819" cluster.
I1121 23:53:37.543012   14585 config.go:182] Loaded profile config "functional-159819": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (5.76s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-159819 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.59s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 cache add registry.k8s.io/pause:latest
E1121 23:53:39.974452   14585 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.59s)

TestFunctional/serial/CacheCmd/cache/add_local (1.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-159819 /tmp/TestFunctionalserialCacheCmdcacheadd_local3734732903/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 cache add minikube-local-cache-test:functional-159819
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 cache delete minikube-local-cache-test:functional-159819
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-159819
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.11s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.47s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-159819 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (266.854463ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.47s)
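
For reference, the reload cycle this test exercises can be reproduced by hand (a minimal sketch; the profile name functional-159819 is specific to this run):

  # drop the image inside the node, confirm it is gone, then restore it from the host-side cache
  out/minikube-linux-amd64 -p functional-159819 ssh sudo crictl rmi registry.k8s.io/pause:latest
  out/minikube-linux-amd64 -p functional-159819 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exits 1: image absent
  out/minikube-linux-amd64 -p functional-159819 cache reload
  out/minikube-linux-amd64 -p functional-159819 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again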

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 kubectl -- --context functional-159819 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-159819 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (39.8s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-159819 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1121 23:54:00.455890   14585 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-159819 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (39.803242362s)
functional_test.go:776: restart took 39.803360617s for "functional-159819" cluster.
I1121 23:54:23.404490   14585 config.go:182] Loaded profile config "functional-159819": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (39.80s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-159819 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)
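
The health check boils down to one kubectl query over the control-plane pods; a jsonpath variant (a sketch, not the test's exact parsing) yields the same phase/readiness view:

  kubectl --context functional-159819 get po -l tier=control-plane -n kube-system \
    -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'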

TestFunctional/serial/LogsCmd (1.13s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-159819 logs: (1.134728419s)
--- PASS: TestFunctional/serial/LogsCmd (1.13s)

TestFunctional/serial/LogsFileCmd (1.15s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 logs --file /tmp/TestFunctionalserialLogsFileCmd3358985709/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-159819 logs --file /tmp/TestFunctionalserialLogsFileCmd3358985709/001/logs.txt: (1.151841765s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.15s)

TestFunctional/serial/InvalidService (4.52s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-159819 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-159819
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-159819: exit status 115 (320.064361ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30331 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-159819 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-159819 delete -f testdata/invalidsvc.yaml: (1.008763023s)
--- PASS: TestFunctional/serial/InvalidService (4.52s)
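
The failure mode asserted here can be reproduced manually (a sketch; exit status 115 is minikube's SVC_UNREACHABLE code):

  kubectl --context functional-159819 apply -f testdata/invalidsvc.yaml
  out/minikube-linux-amd64 service invalid-svc -p functional-159819    # exit 115: no running pod backs the service
  kubectl --context functional-159819 delete -f testdata/invalidsvc.yaml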

TestFunctional/parallel/ConfigCmd (0.46s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-159819 config get cpus: exit status 14 (90.20292ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-159819 config get cpus: exit status 14 (91.395316ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.46s)
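
The exit-code contract exercised above: config get on a key that is not set fails with status 14, while set/get/unset round-trip cleanly. A condensed sketch:

  out/minikube-linux-amd64 -p functional-159819 config unset cpus
  out/minikube-linux-amd64 -p functional-159819 config get cpus    # exit 14: "specified key could not be found in config"
  out/minikube-linux-amd64 -p functional-159819 config set cpus 2
  out/minikube-linux-amd64 -p functional-159819 config get cpus    # prints 2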

TestFunctional/parallel/DashboardCmd (6.98s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-159819 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-159819 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 51950: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (6.98s)

TestFunctional/parallel/DryRun (0.38s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-159819 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-159819 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (167.152789ms)

-- stdout --
	* [functional-159819] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21934
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21934-9122/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-9122/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1121 23:54:57.585321   51036 out.go:360] Setting OutFile to fd 1 ...
	I1121 23:54:57.585413   51036 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:54:57.585423   51036 out.go:374] Setting ErrFile to fd 2...
	I1121 23:54:57.585428   51036 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:54:57.585619   51036 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9122/.minikube/bin
	I1121 23:54:57.585990   51036 out.go:368] Setting JSON to false
	I1121 23:54:57.587082   51036 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2247,"bootTime":1763767051,"procs":242,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1121 23:54:57.587210   51036 start.go:143] virtualization: kvm guest
	I1121 23:54:57.593044   51036 out.go:179] * [functional-159819] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1121 23:54:57.595722   51036 out.go:179]   - MINIKUBE_LOCATION=21934
	I1121 23:54:57.595725   51036 notify.go:221] Checking for updates...
	I1121 23:54:57.596872   51036 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 23:54:57.598667   51036 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-9122/kubeconfig
	I1121 23:54:57.599967   51036 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-9122/.minikube
	I1121 23:54:57.601080   51036 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1121 23:54:57.602179   51036 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 23:54:57.603580   51036 config.go:182] Loaded profile config "functional-159819": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:54:57.604148   51036 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 23:54:57.629199   51036 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1121 23:54:57.629317   51036 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 23:54:57.688195   51036 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-21 23:54:57.678008597 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 23:54:57.688338   51036 docker.go:319] overlay module found
	I1121 23:54:57.689998   51036 out.go:179] * Using the docker driver based on existing profile
	I1121 23:54:57.691155   51036 start.go:309] selected driver: docker
	I1121 23:54:57.691170   51036 start.go:930] validating driver "docker" against &{Name:functional-159819 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-159819 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 23:54:57.691289   51036 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 23:54:57.692973   51036 out.go:203] 
	W1121 23:54:57.693990   51036 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1121 23:54:57.695113   51036 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-159819 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.38s)
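
The property under test: --dry-run validates the request against the existing profile without touching the cluster, so an undersized --memory fails fast with exit 23 (RSRC_INSUFFICIENT_REQ_MEMORY) while a valid dry run exits 0. A minimal sketch:

  out/minikube-linux-amd64 start -p functional-159819 --dry-run --memory 250MB --driver=docker --container-runtime=crio   # exit 23
  out/minikube-linux-amd64 start -p functional-159819 --dry-run --driver=docker --container-runtime=crio                  # exit 0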

TestFunctional/parallel/InternationalLanguage (0.16s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-159819 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-159819 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (162.487254ms)

-- stdout --
	* [functional-159819] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21934
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21934-9122/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-9122/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1121 23:54:58.867309   51639 out.go:360] Setting OutFile to fd 1 ...
	I1121 23:54:58.867388   51639 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:54:58.867392   51639 out.go:374] Setting ErrFile to fd 2...
	I1121 23:54:58.867395   51639 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:54:58.867643   51639 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9122/.minikube/bin
	I1121 23:54:58.868040   51639 out.go:368] Setting JSON to false
	I1121 23:54:58.868936   51639 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2248,"bootTime":1763767051,"procs":240,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1121 23:54:58.868995   51639 start.go:143] virtualization: kvm guest
	I1121 23:54:58.870972   51639 out.go:179] * [functional-159819] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1121 23:54:58.872036   51639 out.go:179]   - MINIKUBE_LOCATION=21934
	I1121 23:54:58.872044   51639 notify.go:221] Checking for updates...
	I1121 23:54:58.873991   51639 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 23:54:58.875129   51639 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-9122/kubeconfig
	I1121 23:54:58.876119   51639 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-9122/.minikube
	I1121 23:54:58.877105   51639 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1121 23:54:58.878128   51639 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 23:54:58.879518   51639 config.go:182] Loaded profile config "functional-159819": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:54:58.880029   51639 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 23:54:58.903237   51639 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1121 23:54:58.903332   51639 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 23:54:58.961962   51639 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-21 23:54:58.952264927 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 23:54:58.962113   51639 docker.go:319] overlay module found
	I1121 23:54:58.963680   51639 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1121 23:54:58.964643   51639 start.go:309] selected driver: docker
	I1121 23:54:58.964661   51639 start.go:930] validating driver "docker" against &{Name:functional-159819 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-159819 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 23:54:58.964767   51639 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 23:54:58.966607   51639 out.go:203] 
	W1121 23:54:58.967707   51639 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1121 23:54:58.968687   51639 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)

TestFunctional/parallel/StatusCmd (0.9s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.90s)
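
Three output modes are exercised: the default human-readable status, a Go template over the status struct, and JSON (the kublet label typo is verbatim from the test's format string):

  out/minikube-linux-amd64 -p functional-159819 status
  out/minikube-linux-amd64 -p functional-159819 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
  out/minikube-linux-amd64 -p functional-159819 status -o json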

TestFunctional/parallel/AddonsCmd (0.16s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

TestFunctional/parallel/PersistentVolumeClaim (24.96s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [82839e6e-ee21-42e7-9fb5-e2d8d4d5e443] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003473191s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-159819 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-159819 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-159819 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-159819 apply -f testdata/storage-provisioner/pod.yaml
I1121 23:54:38.904250   14585 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [51325716-843f-4bd5-af13-e818ff6d3031] Pending
helpers_test.go:352: "sp-pod" [51325716-843f-4bd5-af13-e818ff6d3031] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E1121 23:54:41.417201   14585 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "sp-pod" [51325716-843f-4bd5-af13-e818ff6d3031] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.003909011s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-159819 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-159819 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-159819 delete -f testdata/storage-provisioner/pod.yaml: (1.323260085s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-159819 apply -f testdata/storage-provisioner/pod.yaml
I1121 23:54:50.451339   14585 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [5a5e4692-e069-4e2e-a08d-0ba6fcce64fc] Pending
helpers_test.go:352: "sp-pod" [5a5e4692-e069-4e2e-a08d-0ba6fcce64fc] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [5a5e4692-e069-4e2e-a08d-0ba6fcce64fc] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003119876s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-159819 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.96s)
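
The property verified above: data written to the PVC-backed mount survives deleting and recreating the pod, because the claim and its provisioned volume outlive the pod. A condensed sketch of the cycle:

  kubectl --context functional-159819 apply -f testdata/storage-provisioner/pvc.yaml
  kubectl --context functional-159819 apply -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-159819 exec sp-pod -- touch /tmp/mount/foo
  kubectl --context functional-159819 delete -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-159819 apply -f testdata/storage-provisioner/pod.yaml    # new pod, same claim
  kubectl --context functional-159819 exec sp-pod -- ls /tmp/mount                      # foo is still present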

TestFunctional/parallel/SSHCmd (0.57s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.57s)

TestFunctional/parallel/CpCmd (1.73s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 ssh -n functional-159819 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 cp functional-159819:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1355586297/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 ssh -n functional-159819 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 ssh -n functional-159819 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.73s)
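
Both copy directions are exercised, each verified with an in-node cat (a sketch; the host destination path here is illustrative, the test used a temp directory):

  out/minikube-linux-amd64 -p functional-159819 cp testdata/cp-test.txt /home/docker/cp-test.txt             # host -> node
  out/minikube-linux-amd64 -p functional-159819 cp functional-159819:/home/docker/cp-test.txt ./cp-test.txt  # node -> host
  out/minikube-linux-amd64 -p functional-159819 ssh -n functional-159819 "sudo cat /home/docker/cp-test.txt"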

TestFunctional/parallel/MySQL (17.01s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-159819 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-9bkv6" [816b67cd-7b22-4212-b65a-86ae177fc4c5] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-9bkv6" [816b67cd-7b22-4212-b65a-86ae177fc4c5] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 13.002880335s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-159819 exec mysql-5bb876957f-9bkv6 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-159819 exec mysql-5bb876957f-9bkv6 -- mysql -ppassword -e "show databases;": exit status 1 (81.960751ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1121 23:54:57.076671   14585 retry.go:31] will retry after 512.688969ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-159819 exec mysql-5bb876957f-9bkv6 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-159819 exec mysql-5bb876957f-9bkv6 -- mysql -ppassword -e "show databases;": exit status 1 (97.121523ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1121 23:54:57.686875   14585 retry.go:31] will retry after 872.021156ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-159819 exec mysql-5bb876957f-9bkv6 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-159819 exec mysql-5bb876957f-9bkv6 -- mysql -ppassword -e "show databases;": exit status 1 (96.565331ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1121 23:54:58.656657   14585 retry.go:31] will retry after 2.113276101s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-159819 exec mysql-5bb876957f-9bkv6 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (17.01s)
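
The ERROR 2002 retries above are expected: the pod reports Running before mysqld finishes initializing, so the test polls with backoff until the server socket accepts connections. The probe being retried (the pod name is specific to this run):

  kubectl --context functional-159819 exec mysql-5bb876957f-9bkv6 -- mysql -ppassword -e "show databases;"
  # fails with ERROR 2002 via /var/run/mysqld/mysqld.sock until startup completes, then exits 0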

TestFunctional/parallel/FileSync (0.27s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/14585/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 ssh "sudo cat /etc/test/nested/copy/14585/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.27s)

TestFunctional/parallel/CertSync (1.74s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/14585.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 ssh "sudo cat /etc/ssl/certs/14585.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/14585.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 ssh "sudo cat /usr/share/ca-certificates/14585.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/145852.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 ssh "sudo cat /etc/ssl/certs/145852.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/145852.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 ssh "sudo cat /usr/share/ca-certificates/145852.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.74s)
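
The layout verified above: for a cert synced into the node, minikube exposes it at /etc/ssl/certs/<name>.pem, at /usr/share/ca-certificates/<name>.pem, and under an openssl-style hash name in /etc/ssl/certs. Spot-check sketch:

  out/minikube-linux-amd64 -p functional-159819 ssh "sudo cat /etc/ssl/certs/14585.pem"
  out/minikube-linux-amd64 -p functional-159819 ssh "sudo cat /usr/share/ca-certificates/14585.pem"
  out/minikube-linux-amd64 -p functional-159819 ssh "sudo cat /etc/ssl/certs/51391683.0"   # hash-named copy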

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-159819 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.61s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-159819 ssh "sudo systemctl is-active docker": exit status 1 (302.907588ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-159819 ssh "sudo systemctl is-active containerd": exit status 1 (306.040341ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.61s)
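
What is asserted: with crio as the active runtime, docker and containerd must be inactive; systemctl is-active signals this through a non-zero exit status (3 for inactive), which the minikube ssh wrapper surfaces as a failed command. Sketch (the crio line is an extra sanity check and an assumption, not part of the test):

  out/minikube-linux-amd64 -p functional-159819 ssh "sudo systemctl is-active docker"       # prints inactive, non-zero exit
  out/minikube-linux-amd64 -p functional-159819 ssh "sudo systemctl is-active containerd"   # prints inactive, non-zero exit
  out/minikube-linux-amd64 -p functional-159819 ssh "sudo systemctl is-active crio"         # expected: active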

TestFunctional/parallel/License (0.48s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.48s)

TestFunctional/parallel/Version/short (0.08s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

TestFunctional/parallel/Version/components (0.44s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.44s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-159819 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-159819 image ls --format short --alsologtostderr:
I1121 23:55:06.947487   53172 out.go:360] Setting OutFile to fd 1 ...
I1121 23:55:06.947757   53172 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1121 23:55:06.947768   53172 out.go:374] Setting ErrFile to fd 2...
I1121 23:55:06.947772   53172 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1121 23:55:06.947958   53172 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9122/.minikube/bin
I1121 23:55:06.948488   53172 config.go:182] Loaded profile config "functional-159819": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1121 23:55:06.948581   53172 config.go:182] Loaded profile config "functional-159819": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1121 23:55:06.948998   53172 cli_runner.go:164] Run: docker container inspect functional-159819 --format={{.State.Status}}
I1121 23:55:06.966432   53172 ssh_runner.go:195] Run: systemctl --version
I1121 23:55:06.966481   53172 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-159819
I1121 23:55:06.983434   53172 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/functional-159819/id_rsa Username:docker}
I1121 23:55:07.070824   53172 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-159819 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ localhost/my-image                      │ functional-159819  │ cd2e1a52f8337 │ 1.47MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ docker.io/library/nginx                 │ alpine             │ d4918ca78576a │ 54.3MB │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/library/nginx                 │ latest             │ 60adc2e137e75 │ 155MB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-159819 image ls --format table --alsologtostderr:
I1121 23:55:09.616837   54413 out.go:360] Setting OutFile to fd 1 ...
I1121 23:55:09.616940   54413 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1121 23:55:09.616948   54413 out.go:374] Setting ErrFile to fd 2...
I1121 23:55:09.616953   54413 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1121 23:55:09.617132   54413 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9122/.minikube/bin
I1121 23:55:09.617616   54413 config.go:182] Loaded profile config "functional-159819": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1121 23:55:09.617702   54413 config.go:182] Loaded profile config "functional-159819": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1121 23:55:09.618107   54413 cli_runner.go:164] Run: docker container inspect functional-159819 --format={{.State.Status}}
I1121 23:55:09.635908   54413 ssh_runner.go:195] Run: systemctl --version
I1121 23:55:09.635956   54413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-159819
I1121 23:55:09.653666   54413 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/functional-159819/id_rsa Username:docker}
I1121 23:55:09.743675   54413 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-159819 image ls --format json --alsologtostderr:
[{"id":"d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9","repoDigests":["docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7","docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54252718"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"5b179a216b3183c29d3a954035a6cc0f8363505b1b17d90df728f0caaf1fafd1","repoDigests":["docker.io/library/74c9676f4f2f5e1c695262815c94e7202323652e8dfce9ad4489097020e6123e-tmp@sha256:ece93f3e1a831269ceaf56be3cc17084417109d581d1a6c571c38337647840db"],"repoTags":[],"size":"1466132"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42","docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541"],"repoTags":["docker.io/library/nginx:latest"],"size":"155491845"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"cd2e1a52f833755646a0eb59db23f08da3d348aa93e0f041c6d1d5655c4354e0","repoDigests":["localhost/my-image@sha256:344968156667ddf624c454bfea4269d1d24d6a0f71c2c46bdc76400c9895d0a4"],"repoTags":["localhost/my-image:functional-159819"],"size":"1468744"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-159819 image ls --format json --alsologtostderr:
I1121 23:55:09.406338   54355 out.go:360] Setting OutFile to fd 1 ...
I1121 23:55:09.406442   54355 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1121 23:55:09.406451   54355 out.go:374] Setting ErrFile to fd 2...
I1121 23:55:09.406455   54355 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1121 23:55:09.406692   54355 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9122/.minikube/bin
I1121 23:55:09.407212   54355 config.go:182] Loaded profile config "functional-159819": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1121 23:55:09.407308   54355 config.go:182] Loaded profile config "functional-159819": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1121 23:55:09.407731   54355 cli_runner.go:164] Run: docker container inspect functional-159819 --format={{.State.Status}}
I1121 23:55:09.425264   54355 ssh_runner.go:195] Run: systemctl --version
I1121 23:55:09.425317   54355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-159819
I1121 23:55:09.441790   54355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/functional-159819/id_rsa Username:docker}
I1121 23:55:09.528949   54355 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)
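
Note on the JSON listing above: every entry follows the same schema (id, repoDigests, repoTags, size, where size is a byte count encoded as a string). A minimal Go sketch for consuming that output, assuming it is piped in on stdin; the file name and struct here are illustrative, not part of minikube:

// listimages.go - decode `minikube -p <profile> image ls --format json` from stdin.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// image mirrors the fields visible in the stdout above; size is bytes as a string.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	var images []image
	if err := json.NewDecoder(os.Stdin).Decode(&images); err != nil {
		fmt.Fprintln(os.Stderr, "decode:", err)
		os.Exit(1)
	}
	for _, img := range images {
		// Untagged entries (e.g. the dashboard images above) have an empty repoTags list.
		tag := "<none>"
		if len(img.RepoTags) > 0 {
			tag = img.RepoTags[0]
		}
		fmt.Printf("%.13s  %s  %s bytes\n", img.ID, tag, img.Size)
	}
}

Run as: out/minikube-linux-amd64 -p functional-159819 image ls --format json | go run listimages.go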

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-159819 image ls --format yaml --alsologtostderr:
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9
repoDigests:
- docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "54252718"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
- docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541
repoTags:
- docker.io/library/nginx:latest
size: "155491845"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-159819 image ls --format yaml --alsologtostderr:
I1121 23:55:07.156606   53227 out.go:360] Setting OutFile to fd 1 ...
I1121 23:55:07.157266   53227 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1121 23:55:07.157280   53227 out.go:374] Setting ErrFile to fd 2...
I1121 23:55:07.157287   53227 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1121 23:55:07.157848   53227 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9122/.minikube/bin
I1121 23:55:07.158642   53227 config.go:182] Loaded profile config "functional-159819": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1121 23:55:07.158755   53227 config.go:182] Loaded profile config "functional-159819": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1121 23:55:07.159289   53227 cli_runner.go:164] Run: docker container inspect functional-159819 --format={{.State.Status}}
I1121 23:55:07.178258   53227 ssh_runner.go:195] Run: systemctl --version
I1121 23:55:07.178302   53227 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-159819
I1121 23:55:07.194242   53227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/functional-159819/id_rsa Username:docker}
I1121 23:55:07.280890   53227 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-159819 ssh pgrep buildkitd: exit status 1 (256.404521ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 image build -t localhost/my-image:functional-159819 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-159819 image build -t localhost/my-image:functional-159819 testdata/build --alsologtostderr: (1.575219396s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-159819 image build -t localhost/my-image:functional-159819 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 5b179a216b3
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-159819
--> cd2e1a52f83
Successfully tagged localhost/my-image:functional-159819
cd2e1a52f833755646a0eb59db23f08da3d348aa93e0f041c6d1d5655c4354e0
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-159819 image build -t localhost/my-image:functional-159819 testdata/build --alsologtostderr:
I1121 23:55:07.622126   53405 out.go:360] Setting OutFile to fd 1 ...
I1121 23:55:07.622405   53405 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1121 23:55:07.622415   53405 out.go:374] Setting ErrFile to fd 2...
I1121 23:55:07.622418   53405 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1121 23:55:07.622626   53405 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9122/.minikube/bin
I1121 23:55:07.623180   53405 config.go:182] Loaded profile config "functional-159819": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1121 23:55:07.624158   53405 config.go:182] Loaded profile config "functional-159819": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1121 23:55:07.625088   53405 cli_runner.go:164] Run: docker container inspect functional-159819 --format={{.State.Status}}
I1121 23:55:07.642532   53405 ssh_runner.go:195] Run: systemctl --version
I1121 23:55:07.642581   53405 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-159819
I1121 23:55:07.659092   53405 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/functional-159819/id_rsa Username:docker}
I1121 23:55:07.748221   53405 build_images.go:162] Building image from path: /tmp/build.872833916.tar
I1121 23:55:07.748309   53405 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1121 23:55:07.756255   53405 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.872833916.tar
I1121 23:55:07.759691   53405 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.872833916.tar: stat -c "%s %y" /var/lib/minikube/build/build.872833916.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.872833916.tar': No such file or directory
I1121 23:55:07.759718   53405 ssh_runner.go:362] scp /tmp/build.872833916.tar --> /var/lib/minikube/build/build.872833916.tar (3072 bytes)
I1121 23:55:07.777031   53405 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.872833916
I1121 23:55:07.784701   53405 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.872833916 -xf /var/lib/minikube/build/build.872833916.tar
I1121 23:55:07.792336   53405 crio.go:315] Building image: /var/lib/minikube/build/build.872833916
I1121 23:55:07.792385   53405 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-159819 /var/lib/minikube/build/build.872833916 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1121 23:55:09.122080   53405 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-159819 /var/lib/minikube/build/build.872833916 --cgroup-manager=cgroupfs: (1.329644313s)
I1121 23:55:09.122161   53405 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.872833916
I1121 23:55:09.129993   53405 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.872833916.tar
I1121 23:55:09.137159   53405 build_images.go:218] Built localhost/my-image:functional-159819 from /tmp/build.872833916.tar
I1121 23:55:09.137184   53405 build_images.go:134] succeeded building to: functional-159819
I1121 23:55:09.137188   53405 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.04s)
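
Note: the STEP 1/3..3/3 lines above imply a three-step build context (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /). A sketch that recreates such a context and drives the same `image build` path from Go; the Dockerfile body is reconstructed from the log and may differ from the real testdata/build:

// buildcheck.go - recreate a build context like the one exercised above and
// run `minikube image build` against it.
package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	dir, err := os.MkdirTemp("", "build-context")
	if err != nil {
		log.Fatal(err)
	}
	defer os.RemoveAll(dir)

	// Reconstructed from the STEP lines in the log; assumption, not the
	// verbatim testdata/build contents.
	dockerfile := "FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n"
	if err := os.WriteFile(filepath.Join(dir, "Dockerfile"), []byte(dockerfile), 0o644); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile(filepath.Join(dir, "content.txt"), []byte("hello\n"), 0o644); err != nil {
		log.Fatal(err)
	}

	// Mirrors functional_test.go:330; on the crio runtime minikube stages the
	// context under /var/lib/minikube/build and builds it with podman.
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-159819",
		"image", "build", "-t", "localhost/my-image:functional-159819", dir)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}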

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.98s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-159819
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.98s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.51s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-159819 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-159819 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-159819 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-159819 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 48462: os: process already finished
helpers_test.go:525: unable to kill pid 48247: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.51s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-159819 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.19s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-159819 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [249ba6c4-a0e2-4775-9d9f-2341abe4ea99] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [249ba6c4-a0e2-4775-9d9f-2341abe4ea99] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003454095s
I1121 23:54:42.253426   14585 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.19s)
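
Note: the Setup step above polls pods labeled run=nginx-svc for up to 4m0s before declaring the service healthy. Outside the harness, a comparable readiness gate can be approximated with kubectl wait; a sketch in Go, assuming kubectl and the functional-159819 context are available:

// waitsvc.go - approximate the 4m readiness wait above with `kubectl wait`.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// kubectl blocks until the labeled pod reports Ready or the timeout expires.
	cmd := exec.Command("kubectl", "--context", "functional-159819",
		"wait", "--for=condition=Ready", "pod",
		"-l", "run=nginx-svc", "--timeout=4m")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}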

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 image rm kicbase/echo-server:functional-159819 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.46s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-159819 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.109.214.18 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
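
Note: AccessDirect only asserts that the LoadBalancer IP published by `minikube tunnel` answers HTTP. A minimal Go probe of the same property, assuming the tunnel is still up; the address is the one logged above and changes on every run:

// tunnelcheck.go - verify an address exposed by `minikube tunnel` answers HTTP.
package main

import (
	"fmt"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	// Address taken from the log above; substitute the IP from your own run.
	resp, err := client.Get("http://10.109.214.18")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	fmt.Println("tunnel answered with status", resp.Status)
}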

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-159819 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "319.325314ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "59.397457ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "318.432031ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "58.361944ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)
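
Note: the Took figures above simply time each invocation; the --light variant skips cluster status validation (per minikube's flag help), which lines up with the ~58ms vs ~318ms gap. A sketch of the same measurement in Go, assuming the binary path used throughout this report:

// timeprofile.go - time `minikube profile list` the way the Took lines above do.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func run(args ...string) time.Duration {
	start := time.Now()
	if err := exec.Command("out/minikube-linux-amd64", args...).Run(); err != nil {
		log.Fatal(err)
	}
	return time.Since(start)
}

func main() {
	fmt.Println("full: ", run("profile", "list", "-o", "json"))
	fmt.Println("light:", run("profile", "list", "-o", "json", "--light"))
}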

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.79s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-159819 /tmp/TestFunctionalparallelMountCmdany-port4279444175/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1763769300861223809" to /tmp/TestFunctionalparallelMountCmdany-port4279444175/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1763769300861223809" to /tmp/TestFunctionalparallelMountCmdany-port4279444175/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1763769300861223809" to /tmp/TestFunctionalparallelMountCmdany-port4279444175/001/test-1763769300861223809
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-159819 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (314.292879ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1121 23:55:01.175856   14585 retry.go:31] will retry after 458.069538ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 21 23:55 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 21 23:55 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 21 23:55 test-1763769300861223809
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 ssh cat /mount-9p/test-1763769300861223809
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-159819 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [d1fccc36-68fc-4ee4-a87f-3cf3e4de1034] Pending
helpers_test.go:352: "busybox-mount" [d1fccc36-68fc-4ee4-a87f-3cf3e4de1034] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
2025/11/21 23:55:05 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:352: "busybox-mount" [d1fccc36-68fc-4ee4-a87f-3cf3e4de1034] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [d1fccc36-68fc-4ee4-a87f-3cf3e4de1034] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003331898s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-159819 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-159819 /tmp/TestFunctionalparallelMountCmdany-port4279444175/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.79s)
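
Note: the findmnt probe above fails once while the 9p mount is still coming up and is retried after a short randomized delay (retry.go:31). A simplified sketch of that poll-with-backoff pattern in Go; this is a plain capped backoff, not minikube's internal retry package:

// retrymount.go - poll a command until it succeeds, with doubling backoff,
// in the spirit of the retry.go lines above.
package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	backoff := 500 * time.Millisecond
	for attempt := 1; attempt <= 5; attempt++ {
		err := exec.Command("out/minikube-linux-amd64", "-p", "functional-159819",
			"ssh", "findmnt -T /mount-9p | grep 9p").Run()
		if err == nil {
			log.Println("mount visible after attempt", attempt)
			return
		}
		log.Printf("attempt %d failed (%v); retrying in %s", attempt, err, backoff)
		time.Sleep(backoff)
		backoff *= 2 // keep a cap on this in real use
	}
	log.Fatal("mount never became visible")
}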

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.98s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-159819 /tmp/TestFunctionalparallelMountCmdspecific-port1985526241/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-159819 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (280.248967ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1121 23:55:08.928207   14585 retry.go:31] will retry after 715.966704ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-159819 /tmp/TestFunctionalparallelMountCmdspecific-port1985526241/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-159819 ssh "sudo umount -f /mount-9p": exit status 1 (255.647912ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-159819 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-159819 /tmp/TestFunctionalparallelMountCmdspecific-port1985526241/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.98s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.66s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-159819 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3892722256/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-159819 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3892722256/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-159819 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3892722256/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-159819 ssh "findmnt -T" /mount1: exit status 1 (333.123639ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1121 23:55:10.962278   14585 retry.go:31] will retry after 425.733071ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-159819 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-159819 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3892722256/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-159819 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3892722256/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-159819 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3892722256/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
E1121 23:56:03.338676   14585 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 23:58:19.475196   14585 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 23:58:47.180270   14585 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:03:19.475302   14585 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.66s)

TestFunctional/parallel/ServiceCmd/List (1.68s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-159819 service list: (1.681634589s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.68s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.68s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-159819 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-159819 service list -o json: (1.677294866s)
functional_test.go:1504: Took "1.677392985s" to run "out/minikube-linux-amd64 -p functional-159819 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.68s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-159819
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-159819
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-159819
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (161.79s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-529099 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m41.127815621s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (161.79s)

TestMultiControlPlane/serial/DeployApp (3.97s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-529099 kubectl -- rollout status deployment/busybox: (2.063208277s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 kubectl -- exec busybox-7b57f96db7-4lp6l -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 kubectl -- exec busybox-7b57f96db7-j56xf -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 kubectl -- exec busybox-7b57f96db7-wzktq -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 kubectl -- exec busybox-7b57f96db7-4lp6l -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 kubectl -- exec busybox-7b57f96db7-j56xf -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 kubectl -- exec busybox-7b57f96db7-wzktq -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 kubectl -- exec busybox-7b57f96db7-4lp6l -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 kubectl -- exec busybox-7b57f96db7-j56xf -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 kubectl -- exec busybox-7b57f96db7-wzktq -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (3.97s)

TestMultiControlPlane/serial/PingHostFromPods (0.99s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 kubectl -- exec busybox-7b57f96db7-4lp6l -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 kubectl -- exec busybox-7b57f96db7-4lp6l -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 kubectl -- exec busybox-7b57f96db7-j56xf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 kubectl -- exec busybox-7b57f96db7-j56xf -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 kubectl -- exec busybox-7b57f96db7-wzktq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 kubectl -- exec busybox-7b57f96db7-wzktq -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.99s)
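
Note: the pipeline `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3` above takes the third space-separated field of nslookup's fifth output line, which is where busybox's nslookup prints the resolved address. The same extraction in Go, with a sample shaped like busybox output (a real run would capture the kubectl exec output instead):

// hostip.go - pull the resolved address out of busybox nslookup output,
// mirroring the awk 'NR==5' | cut -d' ' -f3 pipeline above.
package main

import (
	"fmt"
	"log"
	"strings"
)

func hostIP(nslookupOutput string) (string, error) {
	lines := strings.Split(nslookupOutput, "\n")
	if len(lines) < 5 {
		return "", fmt.Errorf("expected at least 5 lines, got %d", len(lines))
	}
	fields := strings.Split(lines[4], " ") // line 5, split on single spaces like cut
	if len(fields) < 3 {
		return "", fmt.Errorf("line 5 has fewer than 3 fields: %q", lines[4])
	}
	return fields[2], nil
}

func main() {
	// Assumed busybox-style nslookup output; only the shape matters here.
	sample := "Server: 10.96.0.10\nAddress 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n\nName: host.minikube.internal\nAddress 1: 192.168.49.1 host.minikube.internal\n"
	ip, err := hostIP(sample)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("host ip:", ip)
}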

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (56.12s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 node add --alsologtostderr -v 5
E1122 00:08:19.474841   14585 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-529099 node add --alsologtostderr -v 5: (55.29936361s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (56.12s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-529099 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.82s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.82s)

TestMultiControlPlane/serial/CopyFile (16.1s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 cp testdata/cp-test.txt ha-529099:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 ssh -n ha-529099 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 cp ha-529099:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile244264973/001/cp-test_ha-529099.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 ssh -n ha-529099 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 cp ha-529099:/home/docker/cp-test.txt ha-529099-m02:/home/docker/cp-test_ha-529099_ha-529099-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 ssh -n ha-529099 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 ssh -n ha-529099-m02 "sudo cat /home/docker/cp-test_ha-529099_ha-529099-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 cp ha-529099:/home/docker/cp-test.txt ha-529099-m03:/home/docker/cp-test_ha-529099_ha-529099-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 ssh -n ha-529099 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 ssh -n ha-529099-m03 "sudo cat /home/docker/cp-test_ha-529099_ha-529099-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 cp ha-529099:/home/docker/cp-test.txt ha-529099-m04:/home/docker/cp-test_ha-529099_ha-529099-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 ssh -n ha-529099 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 ssh -n ha-529099-m04 "sudo cat /home/docker/cp-test_ha-529099_ha-529099-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 cp testdata/cp-test.txt ha-529099-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 ssh -n ha-529099-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 cp ha-529099-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile244264973/001/cp-test_ha-529099-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 ssh -n ha-529099-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 cp ha-529099-m02:/home/docker/cp-test.txt ha-529099:/home/docker/cp-test_ha-529099-m02_ha-529099.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 ssh -n ha-529099-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 ssh -n ha-529099 "sudo cat /home/docker/cp-test_ha-529099-m02_ha-529099.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 cp ha-529099-m02:/home/docker/cp-test.txt ha-529099-m03:/home/docker/cp-test_ha-529099-m02_ha-529099-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 ssh -n ha-529099-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 ssh -n ha-529099-m03 "sudo cat /home/docker/cp-test_ha-529099-m02_ha-529099-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 cp ha-529099-m02:/home/docker/cp-test.txt ha-529099-m04:/home/docker/cp-test_ha-529099-m02_ha-529099-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 ssh -n ha-529099-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 ssh -n ha-529099-m04 "sudo cat /home/docker/cp-test_ha-529099-m02_ha-529099-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 cp testdata/cp-test.txt ha-529099-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 ssh -n ha-529099-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 cp ha-529099-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile244264973/001/cp-test_ha-529099-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 ssh -n ha-529099-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 cp ha-529099-m03:/home/docker/cp-test.txt ha-529099:/home/docker/cp-test_ha-529099-m03_ha-529099.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 ssh -n ha-529099-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 ssh -n ha-529099 "sudo cat /home/docker/cp-test_ha-529099-m03_ha-529099.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 cp ha-529099-m03:/home/docker/cp-test.txt ha-529099-m02:/home/docker/cp-test_ha-529099-m03_ha-529099-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 ssh -n ha-529099-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 ssh -n ha-529099-m02 "sudo cat /home/docker/cp-test_ha-529099-m03_ha-529099-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 cp ha-529099-m03:/home/docker/cp-test.txt ha-529099-m04:/home/docker/cp-test_ha-529099-m03_ha-529099-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 ssh -n ha-529099-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 ssh -n ha-529099-m04 "sudo cat /home/docker/cp-test_ha-529099-m03_ha-529099-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 cp testdata/cp-test.txt ha-529099-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 ssh -n ha-529099-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 cp ha-529099-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile244264973/001/cp-test_ha-529099-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 ssh -n ha-529099-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 cp ha-529099-m04:/home/docker/cp-test.txt ha-529099:/home/docker/cp-test_ha-529099-m04_ha-529099.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 ssh -n ha-529099-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 ssh -n ha-529099 "sudo cat /home/docker/cp-test_ha-529099-m04_ha-529099.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 cp ha-529099-m04:/home/docker/cp-test.txt ha-529099-m02:/home/docker/cp-test_ha-529099-m04_ha-529099-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 ssh -n ha-529099-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 ssh -n ha-529099-m02 "sudo cat /home/docker/cp-test_ha-529099-m04_ha-529099-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 cp ha-529099-m04:/home/docker/cp-test.txt ha-529099-m03:/home/docker/cp-test_ha-529099-m04_ha-529099-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 ssh -n ha-529099-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 ssh -n ha-529099-m03 "sudo cat /home/docker/cp-test_ha-529099-m04_ha-529099-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (16.10s)
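Note on the steps above: they exercise the full minikube cp matrix, host to node, node to host, and node to node for every pair, verifying each copy with ssh -n <node> "sudo cat ...". A condensed sketch of one host-node-host round trip (the /tmp target path is illustrative):

	out/minikube-linux-amd64 -p ha-529099 cp testdata/cp-test.txt ha-529099-m02:/home/docker/cp-test.txt
	out/minikube-linux-amd64 -p ha-529099 cp ha-529099-m02:/home/docker/cp-test.txt /tmp/cp-test-roundtrip.txt
	diff testdata/cp-test.txt /tmp/cp-test-roundtrip.txt    # no output means the contents survived the round trip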

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (14.1s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-529099 node stop m02 --alsologtostderr -v 5: (13.455937504s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-529099 status --alsologtostderr -v 5: exit status 7 (645.270238ms)

-- stdout --
	ha-529099
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-529099-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-529099-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-529099-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1122 00:08:55.939429   80108 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:08:55.939689   80108 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:08:55.939699   80108 out.go:374] Setting ErrFile to fd 2...
	I1122 00:08:55.939703   80108 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:08:55.939880   80108 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9122/.minikube/bin
	I1122 00:08:55.940092   80108 out.go:368] Setting JSON to false
	I1122 00:08:55.940127   80108 mustload.go:66] Loading cluster: ha-529099
	I1122 00:08:55.940215   80108 notify.go:221] Checking for updates...
	I1122 00:08:55.940466   80108 config.go:182] Loaded profile config "ha-529099": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:08:55.940480   80108 status.go:174] checking status of ha-529099 ...
	I1122 00:08:55.940957   80108 cli_runner.go:164] Run: docker container inspect ha-529099 --format={{.State.Status}}
	I1122 00:08:55.958671   80108 status.go:371] ha-529099 host status = "Running" (err=<nil>)
	I1122 00:08:55.958695   80108 host.go:66] Checking if "ha-529099" exists ...
	I1122 00:08:55.958992   80108 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-529099
	I1122 00:08:55.976228   80108 host.go:66] Checking if "ha-529099" exists ...
	I1122 00:08:55.976532   80108 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:08:55.976578   80108 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-529099
	I1122 00:08:55.992766   80108 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/ha-529099/id_rsa Username:docker}
	I1122 00:08:56.079869   80108 ssh_runner.go:195] Run: systemctl --version
	I1122 00:08:56.085945   80108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:08:56.097470   80108 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:08:56.157490   80108 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:72 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-22 00:08:56.146411414 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1122 00:08:56.158010   80108 kubeconfig.go:125] found "ha-529099" server: "https://192.168.49.254:8443"
	I1122 00:08:56.158038   80108 api_server.go:166] Checking apiserver status ...
	I1122 00:08:56.158094   80108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:08:56.169269   80108 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1258/cgroup
	W1122 00:08:56.178117   80108 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1258/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1122 00:08:56.178156   80108 ssh_runner.go:195] Run: ls
	I1122 00:08:56.181640   80108 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1122 00:08:56.185521   80108 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1122 00:08:56.185541   80108 status.go:463] ha-529099 apiserver status = Running (err=<nil>)
	I1122 00:08:56.185549   80108 status.go:176] ha-529099 status: &{Name:ha-529099 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1122 00:08:56.185563   80108 status.go:174] checking status of ha-529099-m02 ...
	I1122 00:08:56.185775   80108 cli_runner.go:164] Run: docker container inspect ha-529099-m02 --format={{.State.Status}}
	I1122 00:08:56.202649   80108 status.go:371] ha-529099-m02 host status = "Stopped" (err=<nil>)
	I1122 00:08:56.202668   80108 status.go:384] host is not running, skipping remaining checks
	I1122 00:08:56.202673   80108 status.go:176] ha-529099-m02 status: &{Name:ha-529099-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1122 00:08:56.202692   80108 status.go:174] checking status of ha-529099-m03 ...
	I1122 00:08:56.202981   80108 cli_runner.go:164] Run: docker container inspect ha-529099-m03 --format={{.State.Status}}
	I1122 00:08:56.219827   80108 status.go:371] ha-529099-m03 host status = "Running" (err=<nil>)
	I1122 00:08:56.219844   80108 host.go:66] Checking if "ha-529099-m03" exists ...
	I1122 00:08:56.220133   80108 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-529099-m03
	I1122 00:08:56.236434   80108 host.go:66] Checking if "ha-529099-m03" exists ...
	I1122 00:08:56.236644   80108 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:08:56.236687   80108 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-529099-m03
	I1122 00:08:56.252968   80108 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/ha-529099-m03/id_rsa Username:docker}
	I1122 00:08:56.338890   80108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:08:56.350748   80108 kubeconfig.go:125] found "ha-529099" server: "https://192.168.49.254:8443"
	I1122 00:08:56.350773   80108 api_server.go:166] Checking apiserver status ...
	I1122 00:08:56.350802   80108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:08:56.361376   80108 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1166/cgroup
	W1122 00:08:56.368950   80108 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1166/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1122 00:08:56.368991   80108 ssh_runner.go:195] Run: ls
	I1122 00:08:56.372299   80108 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1122 00:08:56.376477   80108 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1122 00:08:56.376496   80108 status.go:463] ha-529099-m03 apiserver status = Running (err=<nil>)
	I1122 00:08:56.376504   80108 status.go:176] ha-529099-m03 status: &{Name:ha-529099-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1122 00:08:56.376520   80108 status.go:174] checking status of ha-529099-m04 ...
	I1122 00:08:56.376740   80108 cli_runner.go:164] Run: docker container inspect ha-529099-m04 --format={{.State.Status}}
	I1122 00:08:56.394225   80108 status.go:371] ha-529099-m04 host status = "Running" (err=<nil>)
	I1122 00:08:56.394243   80108 host.go:66] Checking if "ha-529099-m04" exists ...
	I1122 00:08:56.394480   80108 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-529099-m04
	I1122 00:08:56.411036   80108 host.go:66] Checking if "ha-529099-m04" exists ...
	I1122 00:08:56.411298   80108 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:08:56.411335   80108 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-529099-m04
	I1122 00:08:56.427520   80108 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/ha-529099-m04/id_rsa Username:docker}
	I1122 00:08:56.513590   80108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:08:56.524940   80108 status.go:176] ha-529099-m04 status: &{Name:ha-529099-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (14.10s)
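Note on the exit code above: the Non-zero exit is the assertion, not a failure. minikube status exits 7 when any node in the profile is stopped, as this run shows with m02 down while both remaining control planes stay Running and the apiserver healthz still answers 200 through the VIP. A sketch of checking that by hand, with the exit-code meaning inferred from this run:

	out/minikube-linux-amd64 -p ha-529099 status --alsologtostderr -v 5
	[ $? -eq 7 ] && echo "a node reports Stopped, as expected after 'node stop m02'"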

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.66s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.66s)

TestMultiControlPlane/serial/RestartSecondaryNode (8.22s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-529099 node start m02 --alsologtostderr -v 5: (7.351630533s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (8.22s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.83s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.83s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (107.19s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 stop --alsologtostderr -v 5
E1122 00:09:31.052513   14585 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/functional-159819/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:09:31.059075   14585 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/functional-159819/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:09:31.070455   14585 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/functional-159819/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:09:31.091815   14585 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/functional-159819/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:09:31.133270   14585 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/functional-159819/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:09:31.214769   14585 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/functional-159819/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:09:31.376281   14585 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/functional-159819/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:09:31.697877   14585 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/functional-159819/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:09:32.339998   14585 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/functional-159819/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:09:33.621982   14585 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/functional-159819/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:09:36.184040   14585 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/functional-159819/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:09:41.305558   14585 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/functional-159819/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:09:42.542176   14585 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-529099 stop --alsologtostderr -v 5: (43.044962682s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 start --wait true --alsologtostderr -v 5
E1122 00:09:51.547885   14585 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/functional-159819/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:10:12.029274   14585 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/functional-159819/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:10:52.991582   14585 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/functional-159819/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-529099 start --wait true --alsologtostderr -v 5: (1m4.015413679s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (107.19s)

TestMultiControlPlane/serial/DeleteSecondaryNode (9.38s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-529099 node delete m03 --alsologtostderr -v 5: (8.63698638s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.38s)
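Note on the last step above: the go-template prints only the Ready condition's status for each node, so after deleting m03 the expected output is one True per remaining node and nothing else. An equivalent jsonpath form, shown for comparison only (not what the test runs):

	kubectl --context ha-529099 get nodes \
	  -o jsonpath='{range .items[*]}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'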

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.64s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.64s)

TestMultiControlPlane/serial/StopCluster (37.68s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-529099 stop --alsologtostderr -v 5: (37.562420265s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-529099 status --alsologtostderr -v 5: exit status 7 (114.686184ms)

-- stdout --
	ha-529099
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-529099-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-529099-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1122 00:11:41.082183   93870 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:11:41.082426   93870 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:11:41.082433   93870 out.go:374] Setting ErrFile to fd 2...
	I1122 00:11:41.082437   93870 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:11:41.082615   93870 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9122/.minikube/bin
	I1122 00:11:41.082780   93870 out.go:368] Setting JSON to false
	I1122 00:11:41.082806   93870 mustload.go:66] Loading cluster: ha-529099
	I1122 00:11:41.082912   93870 notify.go:221] Checking for updates...
	I1122 00:11:41.083150   93870 config.go:182] Loaded profile config "ha-529099": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:11:41.083180   93870 status.go:174] checking status of ha-529099 ...
	I1122 00:11:41.083673   93870 cli_runner.go:164] Run: docker container inspect ha-529099 --format={{.State.Status}}
	I1122 00:11:41.102861   93870 status.go:371] ha-529099 host status = "Stopped" (err=<nil>)
	I1122 00:11:41.102893   93870 status.go:384] host is not running, skipping remaining checks
	I1122 00:11:41.102911   93870 status.go:176] ha-529099 status: &{Name:ha-529099 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1122 00:11:41.102949   93870 status.go:174] checking status of ha-529099-m02 ...
	I1122 00:11:41.103227   93870 cli_runner.go:164] Run: docker container inspect ha-529099-m02 --format={{.State.Status}}
	I1122 00:11:41.119702   93870 status.go:371] ha-529099-m02 host status = "Stopped" (err=<nil>)
	I1122 00:11:41.119718   93870 status.go:384] host is not running, skipping remaining checks
	I1122 00:11:41.119723   93870 status.go:176] ha-529099-m02 status: &{Name:ha-529099-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1122 00:11:41.119741   93870 status.go:174] checking status of ha-529099-m04 ...
	I1122 00:11:41.119951   93870 cli_runner.go:164] Run: docker container inspect ha-529099-m04 --format={{.State.Status}}
	I1122 00:11:41.135397   93870 status.go:371] ha-529099-m04 host status = "Stopped" (err=<nil>)
	I1122 00:11:41.135435   93870 status.go:384] host is not running, skipping remaining checks
	I1122 00:11:41.135454   93870 status.go:176] ha-529099-m04 status: &{Name:ha-529099-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (37.68s)

TestMultiControlPlane/serial/RestartCluster (53.73s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1122 00:12:14.914306   14585 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/functional-159819/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-529099 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (52.995817777s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (53.73s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.64s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.64s)

TestMultiControlPlane/serial/AddSecondaryNode (66.96s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 node add --control-plane --alsologtostderr -v 5
E1122 00:13:19.475403   14585 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-529099 node add --control-plane --alsologtostderr -v 5: (1m6.152654791s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-529099 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (66.96s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.82s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.82s)

TestJSONOutput/start/Command (66.34s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-703168 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1122 00:14:31.052648   14585 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/functional-159819/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-703168 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m6.337963292s)
--- PASS: TestJSONOutput/start/Command (66.34s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.05s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-703168 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-703168 --output=json --user=testUser: (6.046317631s)
--- PASS: TestJSONOutput/stop/Command (6.05s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-662862 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-662862 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (73.076868ms)

-- stdout --
	{"specversion":"1.0","id":"d76c9cc0-9677-427d-8d78-57bdb525d04c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-662862] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b8ffb8e2-953a-44f6-8919-394e7664bc6a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21934"}}
	{"specversion":"1.0","id":"e4336cf0-736c-40ea-b310-0149e726e882","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"451a329a-8220-4364-b4f0-a5c63ff3e512","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21934-9122/kubeconfig"}}
	{"specversion":"1.0","id":"56e4a398-f209-4d6c-b387-16f8af5ae390","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-9122/.minikube"}}
	{"specversion":"1.0","id":"7e9c0204-a4fb-43b1-9393-00409bf82617","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"235c47df-9ab6-443f-8c0b-61aee5d2f3d4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"62b95b1f-a875-44ea-944b-fa8e02a501f7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-662862" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-662862
--- PASS: TestErrorJSONOutput (0.22s)
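Note on the stdout above: each line minikube emits with --output=json is a CloudEvents envelope, and the test drives the bogus 'fail' driver to assert the stream ends in an io.k8s.sigs.minikube.error event with exitcode 56 (DRV_UNSUPPORTED_OS). A sketch of pulling that field out with jq (jq is not part of the test):

	out/minikube-linux-amd64 start -p json-output-error-662862 --memory=3072 \
	  --output=json --wait=true --driver=fail \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.exitcode'
	# prints: 56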

                                                
                                    
TestKicCustomNetwork/create_custom_network (27.08s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-151638 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-151638 --network=: (24.980166884s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-151638" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-151638
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-151638: (2.078177041s)
--- PASS: TestKicCustomNetwork/create_custom_network (27.08s)
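Note on the flag above: an empty --network= makes minikube create a dedicated docker network named after the profile, while --network=bridge in the next subtest attaches to docker's default bridge instead; both subtests confirm the outcome via docker network ls. A quick manual version of the check (the grep is illustrative):

	out/minikube-linux-amd64 start -p docker-network-151638 --network=
	docker network ls --format '{{.Name}}' | grep docker-network-151638    # present only in the custom-network case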

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (21.4s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-763396 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-763396 --network=bridge: (19.420252611s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-763396" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-763396
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-763396: (1.961231192s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (21.40s)

TestKicExistingNetwork (27.03s)

=== RUN   TestKicExistingNetwork
I1122 00:16:01.040095   14585 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1122 00:16:01.055429   14585 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1122 00:16:01.055504   14585 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1122 00:16:01.055534   14585 cli_runner.go:164] Run: docker network inspect existing-network
W1122 00:16:01.071815   14585 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1122 00:16:01.071841   14585 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1122 00:16:01.071853   14585 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1122 00:16:01.072004   14585 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1122 00:16:01.087672   14585 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-680fbf0b84de IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8e:e6:2f:e5:eb:9f} reservation:<nil>}
I1122 00:16:01.088021   14585 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000615f40}
I1122 00:16:01.088070   14585 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1122 00:16:01.088123   14585 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1122 00:16:01.132632   14585 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-650330 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-650330 --network=existing-network: (24.943963328s)
helpers_test.go:175: Cleaning up "existing-network-650330" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-650330
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-650330: (1.959560056s)
I1122 00:16:28.052405   14585 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (27.03s)
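Note on the trace above: before reusing a network, minikube scans candidate private /24 subnets, skips 192.168.49.0/24 because the default minikube bridge already occupies it, and creates existing-network on the next free range, 192.168.58.0/24; the later start then attaches to that pre-existing network by name. The creation command from the trace, reproduced standalone:

	docker network create --driver=bridge \
	  --subnet=192.168.58.0/24 --gateway=192.168.58.1 \
	  -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
	  --label=created_by.minikube.sigs.k8s.io=true \
	  --label=name.minikube.sigs.k8s.io=existing-network \
	  existing-network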

                                                
                                    
TestKicCustomSubnet (22.94s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-813052 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-813052 --subnet=192.168.60.0/24: (20.84339139s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-813052 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-813052" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-813052
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-813052: (2.081390812s)
--- PASS: TestKicCustomSubnet (22.94s)

TestKicStaticIP (23.43s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-771902 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-771902 --static-ip=192.168.200.200: (21.17876061s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-771902 ip
helpers_test.go:175: Cleaning up "static-ip-771902" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-771902
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-771902: (2.104922528s)
--- PASS: TestKicStaticIP (23.43s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (46.92s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-225464 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-225464 --driver=docker  --container-runtime=crio: (19.861919572s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-227539 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-227539 --driver=docker  --container-runtime=crio: (21.336883688s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-225464
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-227539
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-227539" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-227539
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-227539: (2.282150036s)
helpers_test.go:175: Cleaning up "first-225464" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-225464
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-225464: (2.271279626s)
--- PASS: TestMinikubeProfile (46.92s)

TestMountStart/serial/StartWithMountFirst (4.64s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-936709 --memory=3072 --mount-string /tmp/TestMountStartserial1824638456/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-936709 --memory=3072 --mount-string /tmp/TestMountStartserial1824638456/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (3.636406681s)
--- PASS: TestMountStart/serial/StartWithMountFirst (4.64s)

TestMountStart/serial/VerifyMountFirst (0.25s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-936709 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)
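The two steps above pair a 9p mount requested at start time with an ssh check that the host directory is visible in the guest. A sketch using the same flags, assuming an arbitrary host directory:

  # start a Kubernetes-free guest with /tmp/host-dir mounted at /minikube-host over 9p
  out/minikube-linux-amd64 start -p mount-demo --memory=3072 \
    --mount-string /tmp/host-dir:/minikube-host \
    --mount-gid 0 --mount-uid 0 --mount-msize 6543 --mount-port 46464 \
    --no-kubernetes --driver=docker --container-runtime=crio
  # list the mount from inside the guest; host files should appear here
  out/minikube-linux-amd64 -p mount-demo ssh -- ls /minikube-host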

TestMountStart/serial/StartWithMountSecond (7.51s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-954614 --memory=3072 --mount-string /tmp/TestMountStartserial1824638456/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-954614 --memory=3072 --mount-string /tmp/TestMountStartserial1824638456/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.511537517s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.51s)

TestMountStart/serial/VerifyMountSecond (0.26s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-954614 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

TestMountStart/serial/DeleteFirst (1.63s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-936709 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-936709 --alsologtostderr -v=5: (1.630785205s)
--- PASS: TestMountStart/serial/DeleteFirst (1.63s)

TestMountStart/serial/VerifyMountPostDelete (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-954614 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

TestMountStart/serial/Stop (1.24s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-954614
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-954614: (1.242431517s)
--- PASS: TestMountStart/serial/Stop (1.24s)

TestMountStart/serial/RestartStopped (7.12s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-954614
E1122 00:18:19.475442   14585 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-954614: (6.115769298s)
--- PASS: TestMountStart/serial/RestartStopped (7.12s)

TestMountStart/serial/VerifyMountPostStop (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-954614 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)

TestMultiNode/serial/FreshStart2Nodes (64.98s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-017915 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-017915 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m4.531999016s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-017915 status --alsologtostderr
E1122 00:19:31.051703   14585 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/functional-159819/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiNode/serial/FreshStart2Nodes (64.98s)
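A two-node bring-up like the one above reduces to a single start invocation plus a status check. Sketch with an arbitrary profile name:

  # one control plane plus one worker, waiting for all components to be ready
  out/minikube-linux-amd64 start -p multinode-demo --wait=true --memory=3072 --nodes=2 --driver=docker --container-runtime=crio
  # both nodes should report Running/Configured
  out/minikube-linux-amd64 -p multinode-demo status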

TestMultiNode/serial/DeployApp2Nodes (3.36s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-017915 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-017915 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-017915 -- rollout status deployment/busybox: (1.969198202s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-017915 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-017915 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-017915 -- exec busybox-7b57f96db7-lpq79 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-017915 -- exec busybox-7b57f96db7-p4n47 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-017915 -- exec busybox-7b57f96db7-lpq79 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-017915 -- exec busybox-7b57f96db7-p4n47 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-017915 -- exec busybox-7b57f96db7-lpq79 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-017915 -- exec busybox-7b57f96db7-p4n47 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.36s)
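The deploy step schedules one busybox replica per node and resolves an external name, the cluster-internal short name, and the fully qualified service name from each pod. A sketch of one probe, where the pod name is a placeholder for whatever `get pods` returns:

  # resolve names from inside a pod; repeat for each replica
  out/minikube-linux-amd64 kubectl -p multinode-demo -- exec <busybox-pod> -- nslookup kubernetes.io
  out/minikube-linux-amd64 kubectl -p multinode-demo -- exec <busybox-pod> -- nslookup kubernetes.default.svc.cluster.local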

TestMultiNode/serial/PingHostFrom2Pods (0.68s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-017915 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-017915 -- exec busybox-7b57f96db7-lpq79 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-017915 -- exec busybox-7b57f96db7-lpq79 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-017915 -- exec busybox-7b57f96db7-p4n47 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-017915 -- exec busybox-7b57f96db7-p4n47 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.68s)
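The pipeline in this test is worth unpacking: in busybox's nslookup output the answer for host.minikube.internal lands on the fifth line, so `awk 'NR==5'` isolates it and `cut -d' ' -f3` takes the address field, which is then pinged from the same pod. A sketch, with the pod name again a placeholder:

  # pull the host gateway address out of the in-pod DNS answer
  HOST_IP=$(out/minikube-linux-amd64 kubectl -p multinode-demo -- exec <busybox-pod> -- \
    sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
  # one ICMP round trip from pod to host proves pod-to-host connectivity
  out/minikube-linux-amd64 kubectl -p multinode-demo -- exec <busybox-pod> -- sh -c "ping -c 1 $HOST_IP"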

TestMultiNode/serial/AddNode (56.01s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-017915 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-017915 -v=5 --alsologtostderr: (55.407946394s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-017915 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (56.01s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-017915 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.61s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.61s)

TestMultiNode/serial/CopyFile (9.17s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-017915 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-017915 cp testdata/cp-test.txt multinode-017915:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-017915 ssh -n multinode-017915 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-017915 cp multinode-017915:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1463968704/001/cp-test_multinode-017915.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-017915 ssh -n multinode-017915 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-017915 cp multinode-017915:/home/docker/cp-test.txt multinode-017915-m02:/home/docker/cp-test_multinode-017915_multinode-017915-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-017915 ssh -n multinode-017915 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-017915 ssh -n multinode-017915-m02 "sudo cat /home/docker/cp-test_multinode-017915_multinode-017915-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-017915 cp multinode-017915:/home/docker/cp-test.txt multinode-017915-m03:/home/docker/cp-test_multinode-017915_multinode-017915-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-017915 ssh -n multinode-017915 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-017915 ssh -n multinode-017915-m03 "sudo cat /home/docker/cp-test_multinode-017915_multinode-017915-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-017915 cp testdata/cp-test.txt multinode-017915-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-017915 ssh -n multinode-017915-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-017915 cp multinode-017915-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1463968704/001/cp-test_multinode-017915-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-017915 ssh -n multinode-017915-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-017915 cp multinode-017915-m02:/home/docker/cp-test.txt multinode-017915:/home/docker/cp-test_multinode-017915-m02_multinode-017915.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-017915 ssh -n multinode-017915-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-017915 ssh -n multinode-017915 "sudo cat /home/docker/cp-test_multinode-017915-m02_multinode-017915.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-017915 cp multinode-017915-m02:/home/docker/cp-test.txt multinode-017915-m03:/home/docker/cp-test_multinode-017915-m02_multinode-017915-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-017915 ssh -n multinode-017915-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-017915 ssh -n multinode-017915-m03 "sudo cat /home/docker/cp-test_multinode-017915-m02_multinode-017915-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-017915 cp testdata/cp-test.txt multinode-017915-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-017915 ssh -n multinode-017915-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-017915 cp multinode-017915-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1463968704/001/cp-test_multinode-017915-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-017915 ssh -n multinode-017915-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-017915 cp multinode-017915-m03:/home/docker/cp-test.txt multinode-017915:/home/docker/cp-test_multinode-017915-m03_multinode-017915.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-017915 ssh -n multinode-017915-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-017915 ssh -n multinode-017915 "sudo cat /home/docker/cp-test_multinode-017915-m03_multinode-017915.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-017915 cp multinode-017915-m03:/home/docker/cp-test.txt multinode-017915-m02:/home/docker/cp-test_multinode-017915-m03_multinode-017915-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-017915 ssh -n multinode-017915-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-017915 ssh -n multinode-017915-m02 "sudo cat /home/docker/cp-test_multinode-017915-m03_multinode-017915-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.17s)
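Each `cp` above is verified by ssh'ing into the destination node and cat'ing the file back. A condensed sketch of the local-to-node and node-to-node legs:

  # local file onto the control-plane node, then read it back
  out/minikube-linux-amd64 -p multinode-demo cp testdata/cp-test.txt multinode-demo:/home/docker/cp-test.txt
  out/minikube-linux-amd64 -p multinode-demo ssh -n multinode-demo "sudo cat /home/docker/cp-test.txt"
  # node-to-node copy, verified on the receiving worker
  out/minikube-linux-amd64 -p multinode-demo cp multinode-demo:/home/docker/cp-test.txt multinode-demo-m02:/home/docker/cp-test.txt
  out/minikube-linux-amd64 -p multinode-demo ssh -n multinode-demo-m02 "sudo cat /home/docker/cp-test.txt"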

TestMultiNode/serial/StopNode (2.18s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-017915 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-017915 node stop m03: (1.259129888s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-017915 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-017915 status: exit status 7 (462.911721ms)

-- stdout --
	multinode-017915
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-017915-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-017915-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-017915 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-017915 status --alsologtostderr: exit status 7 (460.411814ms)

-- stdout --
	multinode-017915
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-017915-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-017915-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1122 00:20:43.046254  154244 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:20:43.046488  154244 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:20:43.046497  154244 out.go:374] Setting ErrFile to fd 2...
	I1122 00:20:43.046501  154244 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:20:43.046713  154244 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9122/.minikube/bin
	I1122 00:20:43.046865  154244 out.go:368] Setting JSON to false
	I1122 00:20:43.046895  154244 mustload.go:66] Loading cluster: multinode-017915
	I1122 00:20:43.047004  154244 notify.go:221] Checking for updates...
	I1122 00:20:43.047360  154244 config.go:182] Loaded profile config "multinode-017915": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:20:43.047382  154244 status.go:174] checking status of multinode-017915 ...
	I1122 00:20:43.047949  154244 cli_runner.go:164] Run: docker container inspect multinode-017915 --format={{.State.Status}}
	I1122 00:20:43.065491  154244 status.go:371] multinode-017915 host status = "Running" (err=<nil>)
	I1122 00:20:43.065525  154244 host.go:66] Checking if "multinode-017915" exists ...
	I1122 00:20:43.065724  154244 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-017915
	I1122 00:20:43.081462  154244 host.go:66] Checking if "multinode-017915" exists ...
	I1122 00:20:43.081671  154244 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:20:43.081719  154244 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-017915
	I1122 00:20:43.097723  154244 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/multinode-017915/id_rsa Username:docker}
	I1122 00:20:43.183661  154244 ssh_runner.go:195] Run: systemctl --version
	I1122 00:20:43.189496  154244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:20:43.200897  154244 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:20:43.257763  154244 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:65 SystemTime:2025-11-22 00:20:43.248040531 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1122 00:20:43.258374  154244 kubeconfig.go:125] found "multinode-017915" server: "https://192.168.67.2:8443"
	I1122 00:20:43.258406  154244 api_server.go:166] Checking apiserver status ...
	I1122 00:20:43.258442  154244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:20:43.269828  154244 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1234/cgroup
	W1122 00:20:43.277791  154244 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1234/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1122 00:20:43.277847  154244 ssh_runner.go:195] Run: ls
	I1122 00:20:43.281163  154244 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1122 00:20:43.285307  154244 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1122 00:20:43.285331  154244 status.go:463] multinode-017915 apiserver status = Running (err=<nil>)
	I1122 00:20:43.285343  154244 status.go:176] multinode-017915 status: &{Name:multinode-017915 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1122 00:20:43.285372  154244 status.go:174] checking status of multinode-017915-m02 ...
	I1122 00:20:43.285620  154244 cli_runner.go:164] Run: docker container inspect multinode-017915-m02 --format={{.State.Status}}
	I1122 00:20:43.302361  154244 status.go:371] multinode-017915-m02 host status = "Running" (err=<nil>)
	I1122 00:20:43.302376  154244 host.go:66] Checking if "multinode-017915-m02" exists ...
	I1122 00:20:43.302583  154244 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-017915-m02
	I1122 00:20:43.318287  154244 host.go:66] Checking if "multinode-017915-m02" exists ...
	I1122 00:20:43.318493  154244 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:20:43.318523  154244 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-017915-m02
	I1122 00:20:43.334523  154244 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21934-9122/.minikube/machines/multinode-017915-m02/id_rsa Username:docker}
	I1122 00:20:43.420425  154244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:20:43.431617  154244 status.go:176] multinode-017915-m02 status: &{Name:multinode-017915-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1122 00:20:43.431640  154244 status.go:174] checking status of multinode-017915-m03 ...
	I1122 00:20:43.431904  154244 cli_runner.go:164] Run: docker container inspect multinode-017915-m03 --format={{.State.Status}}
	I1122 00:20:43.449279  154244 status.go:371] multinode-017915-m03 host status = "Stopped" (err=<nil>)
	I1122 00:20:43.449295  154244 status.go:384] host is not running, skipping remaining checks
	I1122 00:20:43.449301  154244 status.go:176] multinode-017915-m03 status: &{Name:multinode-017915-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.18s)
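Note the exit-code semantics the test relies on: with m03 stopped, `status` still prints a per-node report but exits 7, and the test treats that non-zero exit as the expected outcome rather than a failure. A sketch of the same check:

  out/minikube-linux-amd64 -p multinode-demo node stop m03
  # exits 7 while any node is down; the per-node report identifies which one
  out/minikube-linux-amd64 -p multinode-demo status || echo "status exited $? (expected while m03 is stopped)"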

TestMultiNode/serial/StartAfterStop (6.89s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-017915 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-017915 node start m03 -v=5 --alsologtostderr: (6.24542862s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-017915 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (6.89s)

TestMultiNode/serial/RestartKeepsNodes (78.37s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-017915
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-017915
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-017915: (29.394142403s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-017915 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-017915 --wait=true -v=5 --alsologtostderr: (48.854470781s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-017915
--- PASS: TestMultiNode/serial/RestartKeepsNodes (78.37s)

TestMultiNode/serial/DeleteNode (5.12s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-017915 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-017915 node delete m03: (4.558172072s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-017915 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.12s)

TestMultiNode/serial/StopMultiNode (28.54s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-017915 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-017915 stop: (28.349342219s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-017915 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-017915 status: exit status 7 (94.60085ms)

-- stdout --
	multinode-017915
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-017915-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-017915 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-017915 status --alsologtostderr: exit status 7 (94.084541ms)

-- stdout --
	multinode-017915
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-017915-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1122 00:22:42.329857  164013 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:22:42.330097  164013 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:22:42.330105  164013 out.go:374] Setting ErrFile to fd 2...
	I1122 00:22:42.330110  164013 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:22:42.330283  164013 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9122/.minikube/bin
	I1122 00:22:42.330436  164013 out.go:368] Setting JSON to false
	I1122 00:22:42.330462  164013 mustload.go:66] Loading cluster: multinode-017915
	I1122 00:22:42.330605  164013 notify.go:221] Checking for updates...
	I1122 00:22:42.330773  164013 config.go:182] Loaded profile config "multinode-017915": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:22:42.330787  164013 status.go:174] checking status of multinode-017915 ...
	I1122 00:22:42.331253  164013 cli_runner.go:164] Run: docker container inspect multinode-017915 --format={{.State.Status}}
	I1122 00:22:42.349565  164013 status.go:371] multinode-017915 host status = "Stopped" (err=<nil>)
	I1122 00:22:42.349601  164013 status.go:384] host is not running, skipping remaining checks
	I1122 00:22:42.349614  164013 status.go:176] multinode-017915 status: &{Name:multinode-017915 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1122 00:22:42.349647  164013 status.go:174] checking status of multinode-017915-m02 ...
	I1122 00:22:42.349863  164013 cli_runner.go:164] Run: docker container inspect multinode-017915-m02 --format={{.State.Status}}
	I1122 00:22:42.366794  164013 status.go:371] multinode-017915-m02 host status = "Stopped" (err=<nil>)
	I1122 00:22:42.366814  164013 status.go:384] host is not running, skipping remaining checks
	I1122 00:22:42.366820  164013 status.go:176] multinode-017915-m02 status: &{Name:multinode-017915-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (28.54s)

TestMultiNode/serial/RestartMultiNode (47.03s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-017915 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1122 00:23:19.475076   14585 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-017915 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (46.470786942s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-017915 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (47.03s)

TestMultiNode/serial/ValidateNameConflict (21.98s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-017915
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-017915-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-017915-m02 --driver=docker  --container-runtime=crio: exit status 14 (72.302874ms)

-- stdout --
	* [multinode-017915-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21934
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21934-9122/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-9122/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-017915-m02' is duplicated with machine name 'multinode-017915-m02' in profile 'multinode-017915'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-017915-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-017915-m03 --driver=docker  --container-runtime=crio: (19.228985561s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-017915
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-017915: exit status 80 (276.812197ms)

-- stdout --
	* Adding node m03 to cluster multinode-017915 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-017915-m03 already exists in multinode-017915-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-017915-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-017915-m03: (2.344314141s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (21.98s)
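Two collisions are exercised here: a new profile may not reuse a machine name from an existing multinode profile (exit 14, MK_USAGE), and `node add` refuses a node name already taken by a standalone profile (exit 80, GUEST_NODE_ADD). A sketch of the first collision:

  # multinode-demo-m02 is already the worker's machine name inside profile multinode-demo,
  # so this start is rejected with exit status 14 before any resources are created
  out/minikube-linux-amd64 start -p multinode-demo-m02 --driver=docker --container-runtime=crio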

TestPreload (85.16s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-417192 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
E1122 00:24:31.051905   14585 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/functional-159819/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-417192 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (47.572915057s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-417192 image pull gcr.io/k8s-minikube/busybox
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-417192
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-417192: (5.82461416s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-417192 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-417192 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (28.376732547s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-417192 image list
helpers_test.go:175: Cleaning up "test-preload-417192" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-417192
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-417192: (2.348619732s)
--- PASS: TestPreload (85.16s)
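The preload test starts on an older Kubernetes with the preloaded-images tarball disabled, pulls an extra image, then restarts and checks the image survived in the runtime's image store. A sketch of that flow with an arbitrary profile name:

  out/minikube-linux-amd64 start -p preload-demo --memory=3072 --wait=true --preload=false --driver=docker --container-runtime=crio --kubernetes-version=v1.32.0
  out/minikube-linux-amd64 -p preload-demo image pull gcr.io/k8s-minikube/busybox
  out/minikube-linux-amd64 stop -p preload-demo
  out/minikube-linux-amd64 start -p preload-demo --memory=3072 --wait=true --driver=docker --container-runtime=crio
  # the pulled busybox image should still be listed after the restart
  out/minikube-linux-amd64 -p preload-demo image list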

TestScheduledStopUnix (97.51s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-366786 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-366786 --memory=3072 --driver=docker  --container-runtime=crio: (21.340259268s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-366786 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1122 00:25:42.005012  180954 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:25:42.005290  180954 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:25:42.005301  180954 out.go:374] Setting ErrFile to fd 2...
	I1122 00:25:42.005305  180954 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:25:42.005471  180954 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9122/.minikube/bin
	I1122 00:25:42.005701  180954 out.go:368] Setting JSON to false
	I1122 00:25:42.005789  180954 mustload.go:66] Loading cluster: scheduled-stop-366786
	I1122 00:25:42.006094  180954 config.go:182] Loaded profile config "scheduled-stop-366786": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:25:42.006159  180954 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/scheduled-stop-366786/config.json ...
	I1122 00:25:42.006321  180954 mustload.go:66] Loading cluster: scheduled-stop-366786
	I1122 00:25:42.006409  180954 config.go:182] Loaded profile config "scheduled-stop-366786": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-366786 -n scheduled-stop-366786
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-366786 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1122 00:25:42.374997  181105 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:25:42.375309  181105 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:25:42.375320  181105 out.go:374] Setting ErrFile to fd 2...
	I1122 00:25:42.375326  181105 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:25:42.375579  181105 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9122/.minikube/bin
	I1122 00:25:42.375817  181105 out.go:368] Setting JSON to false
	I1122 00:25:42.375989  181105 daemonize_unix.go:73] killing process 180988 as it is an old scheduled stop
	I1122 00:25:42.376115  181105 mustload.go:66] Loading cluster: scheduled-stop-366786
	I1122 00:25:42.376446  181105 config.go:182] Loaded profile config "scheduled-stop-366786": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:25:42.376524  181105 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/scheduled-stop-366786/config.json ...
	I1122 00:25:42.376710  181105 mustload.go:66] Loading cluster: scheduled-stop-366786
	I1122 00:25:42.376825  181105 config.go:182] Loaded profile config "scheduled-stop-366786": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1122 00:25:42.381890   14585 retry.go:31] will retry after 59.719µs: open /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/scheduled-stop-366786/pid: no such file or directory
I1122 00:25:42.383074   14585 retry.go:31] will retry after 112.451µs: open /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/scheduled-stop-366786/pid: no such file or directory
I1122 00:25:42.384197   14585 retry.go:31] will retry after 335.84µs: open /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/scheduled-stop-366786/pid: no such file or directory
I1122 00:25:42.385363   14585 retry.go:31] will retry after 443.536µs: open /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/scheduled-stop-366786/pid: no such file or directory
I1122 00:25:42.386484   14585 retry.go:31] will retry after 539.777µs: open /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/scheduled-stop-366786/pid: no such file or directory
I1122 00:25:42.387627   14585 retry.go:31] will retry after 631.314µs: open /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/scheduled-stop-366786/pid: no such file or directory
I1122 00:25:42.388747   14585 retry.go:31] will retry after 1.679001ms: open /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/scheduled-stop-366786/pid: no such file or directory
I1122 00:25:42.390972   14585 retry.go:31] will retry after 2.305222ms: open /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/scheduled-stop-366786/pid: no such file or directory
I1122 00:25:42.394176   14585 retry.go:31] will retry after 3.098556ms: open /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/scheduled-stop-366786/pid: no such file or directory
I1122 00:25:42.398382   14585 retry.go:31] will retry after 4.601953ms: open /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/scheduled-stop-366786/pid: no such file or directory
I1122 00:25:42.403586   14585 retry.go:31] will retry after 7.934969ms: open /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/scheduled-stop-366786/pid: no such file or directory
I1122 00:25:42.411790   14585 retry.go:31] will retry after 10.144472ms: open /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/scheduled-stop-366786/pid: no such file or directory
I1122 00:25:42.423030   14585 retry.go:31] will retry after 7.318789ms: open /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/scheduled-stop-366786/pid: no such file or directory
I1122 00:25:42.431270   14585 retry.go:31] will retry after 18.133554ms: open /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/scheduled-stop-366786/pid: no such file or directory
I1122 00:25:42.450440   14585 retry.go:31] will retry after 22.672773ms: open /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/scheduled-stop-366786/pid: no such file or directory
I1122 00:25:42.473670   14585 retry.go:31] will retry after 28.219722ms: open /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/scheduled-stop-366786/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-366786 --cancel-scheduled
minikube stop output:

-- stdout --
	* All existing scheduled stops cancelled

-- /stdout --
E1122 00:25:54.118232   14585 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/functional-159819/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-366786 -n scheduled-stop-366786
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-366786
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-366786 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1122 00:26:08.218176  181671 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:26:08.218443  181671 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:26:08.218455  181671 out.go:374] Setting ErrFile to fd 2...
	I1122 00:26:08.218459  181671 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:26:08.218683  181671 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9122/.minikube/bin
	I1122 00:26:08.218958  181671 out.go:368] Setting JSON to false
	I1122 00:26:08.219066  181671 mustload.go:66] Loading cluster: scheduled-stop-366786
	I1122 00:26:08.219410  181671 config.go:182] Loaded profile config "scheduled-stop-366786": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:26:08.219494  181671 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/scheduled-stop-366786/config.json ...
	I1122 00:26:08.219714  181671 mustload.go:66] Loading cluster: scheduled-stop-366786
	I1122 00:26:08.219837  181671 config.go:182] Loaded profile config "scheduled-stop-366786": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

** /stderr **
E1122 00:26:22.544483   14585 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-366786
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-366786: exit status 7 (80.054629ms)

-- stdout --
	scheduled-stop-366786
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-366786 -n scheduled-stop-366786
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-366786 -n scheduled-stop-366786: exit status 7 (76.26544ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-366786" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-366786
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-366786: (4.733523346s)
--- PASS: TestScheduledStopUnix (97.51s)
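The scheduled-stop flow above arms a timer, replaces it (killing the older daemonized stop process), cancels, then lets a short 15s schedule fire and polls status until the host reports Stopped. A sketch of the user-facing commands:

  out/minikube-linux-amd64 stop -p sched-demo --schedule 5m        # arm a stop five minutes out
  out/minikube-linux-amd64 stop -p sched-demo --schedule 15s       # re-arm; the 5m timer is discarded
  out/minikube-linux-amd64 stop -p sched-demo --cancel-scheduled   # disarm everything
  # after a schedule fires, status exits 7 and the host shows Stopped
  out/minikube-linux-amd64 status --format={{.Host}} -p sched-demo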

TestInsufficientStorage (12.08s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-310459 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-310459 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (9.679648649s)

-- stdout --
	{"specversion":"1.0","id":"c617fe9c-17ee-47ca-83b3-de19bd188117","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-310459] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5067199f-2709-4dc1-8091-3bf00901d344","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21934"}}
	{"specversion":"1.0","id":"ec046181-1057-4cbc-a68c-95e5536294fc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"3830005e-36af-47ce-a2be-98e693b6218d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21934-9122/kubeconfig"}}
	{"specversion":"1.0","id":"48a50a09-ecc0-4d37-85a1-c35c2c8fa959","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-9122/.minikube"}}
	{"specversion":"1.0","id":"be9f602a-953b-43d5-a639-b0f39e4a63e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"35894f3d-54cd-456b-aeee-a9fa0e431b80","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1a29fd27-7741-41a2-ab1b-a5b3e027486c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"0a323834-7dde-4d0c-94ef-dd78c840e04f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"962ec2ec-84aa-4bbf-9585-70775275d896","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"8f56ec0b-9d92-4654-b23b-7fb1b43a5dd7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"9676b598-92c1-4e69-b120-0ddb593eb711","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-310459\" primary control-plane node in \"insufficient-storage-310459\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"44d9e909-b53c-4383-adbb-b2fd66ab5a00","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1763588073-21934 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"eab906cb-f379-4aea-abae-5e61571c0ab1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"93ac7380-96b3-4364-add2-592edc8946b3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-310459 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-310459 --output=json --layout=cluster: exit status 7 (275.648648ms)
-- stdout --
	{"Name":"insufficient-storage-310459","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-310459","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1122 00:27:08.058386  184172 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-310459" does not appear in /home/jenkins/minikube-integration/21934-9122/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-310459 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-310459 --output=json --layout=cluster: exit status 7 (269.99295ms)
-- stdout --
	{"Name":"insufficient-storage-310459","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-310459","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1122 00:27:08.329012  184284 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-310459" does not appear in /home/jenkins/minikube-integration/21934-9122/kubeconfig
	E1122 00:27:08.339094  184284 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/insufficient-storage-310459/events.json: no such file or directory
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-310459" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-310459
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-310459: (1.850226008s)
--- PASS: TestInsufficientStorage (12.08s)
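For reference, the RSRC_DOCKER_STORAGE advice captured above reduces to a couple of host-side commands. A minimal sketch, assuming the same profile name as this run; note that --force only bypasses the check and does not free any space:

    docker system prune -a      # remove unused Docker data, as the advice suggests
    df -h /var                  # confirm /var is no longer at 100% of capacity
    minikube start -p insufficient-storage-310459 --force   # skip the storage check (not a fix)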

TestRunningBinaryUpgrade (68s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.187391912 start -p running-upgrade-670577 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.187391912 start -p running-upgrade-670577 --memory=3072 --vm-driver=docker  --container-runtime=crio: (41.636288641s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-670577 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-670577 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (21.500741027s)
helpers_test.go:175: Cleaning up "running-upgrade-670577" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-670577
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-670577: (4.341183823s)
--- PASS: TestRunningBinaryUpgrade (68.00s)
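Condensed, the running-binary upgrade exercised above is: start a profile with the old release, then re-run start on the same profile with the new binary while the cluster is still running. A sketch using the binaries from this run:

    /tmp/minikube-v1.32.0.187391912 start -p running-upgrade-670577 --memory=3072 --vm-driver=docker --container-runtime=crio
    out/minikube-linux-amd64 start -p running-upgrade-670577 --memory=3072 --driver=docker --container-runtime=crio   # in-place upgrade
    out/minikube-linux-amd64 delete -p running-upgrade-670577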

TestKubernetesUpgrade (302.53s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-619859 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-619859 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (21.922279291s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-619859
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-619859: (4.788623213s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-619859 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-619859 status --format={{.Host}}: exit status 7 (76.56999ms)
-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-619859 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1122 00:29:31.051557   14585 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/functional-159819/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-619859 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m25.864931879s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-619859 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-619859 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-619859 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (96.633574ms)
-- stdout --
	* [kubernetes-upgrade-619859] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21934
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21934-9122/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-9122/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-619859
	    minikube start -p kubernetes-upgrade-619859 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6198592 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-619859 --kubernetes-version=v1.34.1

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-619859 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-619859 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (5.74789722s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-619859" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-619859
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-619859: (3.969355431s)
--- PASS: TestKubernetesUpgrade (302.53s)
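The sequence above shows that version upgrades are applied in place, while downgrades are refused with exit code 106 (K8S_DOWNGRADE_UNSUPPORTED) plus a recovery hint. Reduced to its shell steps:

    minikube start -p kubernetes-upgrade-619859 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio
    minikube stop -p kubernetes-upgrade-619859
    minikube start -p kubernetes-upgrade-619859 --kubernetes-version=v1.34.1 --driver=docker --container-runtime=crio   # upgrade: allowed
    minikube start -p kubernetes-upgrade-619859 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio   # downgrade: exit 106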

TestMissingContainerUpgrade (68.27s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.2095626222 start -p missing-upgrade-330033 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.2095626222 start -p missing-upgrade-330033 --memory=3072 --driver=docker  --container-runtime=crio: (23.350194762s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-330033
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-330033: (4.288844556s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-330033
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-330033 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-330033 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (37.987144765s)
helpers_test.go:175: Cleaning up "missing-upgrade-330033" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-330033
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-330033: (2.075617077s)
--- PASS: TestMissingContainerUpgrade (68.27s)
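What this test demonstrates: a profile whose node container was removed behind minikube's back can be recovered simply by re-running start. The equivalent manual reproduction, using this run's profile name:

    docker stop missing-upgrade-330033 && docker rm missing-upgrade-330033   # delete the node container out-of-band
    out/minikube-linux-amd64 start -p missing-upgrade-330033 --driver=docker --container-runtime=crio   # recreates the container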

TestPause/serial/Start (53.19s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-044220 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-044220 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (53.193156858s)
--- PASS: TestPause/serial/Start (53.19s)

TestStoppedBinaryUpgrade/Setup (0.46s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.46s)

TestStoppedBinaryUpgrade/Upgrade (99.13s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.3687948356 start -p stopped-upgrade-220412 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.3687948356 start -p stopped-upgrade-220412 --memory=3072 --vm-driver=docker  --container-runtime=crio: (1m20.369973939s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.3687948356 -p stopped-upgrade-220412 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.3687948356 -p stopped-upgrade-220412 stop: (1.96814235s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-220412 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-220412 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (16.79358425s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (99.13s)

TestPause/serial/SecondStartNoReconfiguration (21.89s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-044220 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-044220 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (21.875182311s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (21.89s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.12s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-220412
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-220412: (1.117730534s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.12s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-953061 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-953061 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (102.90537ms)
-- stdout --
	* [NoKubernetes-953061] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21934
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21934-9122/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-9122/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
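As the MK_USAGE error explains, --no-kubernetes and --kubernetes-version are mutually exclusive, and a version pinned in the global config must be unset first. In shell terms:

    minikube start -p NoKubernetes-953061 --no-kubernetes --kubernetes-version=v1.28.0   # rejected, exit 14
    minikube config unset kubernetes-version        # clear a globally pinned version, per the hint above
    minikube start -p NoKubernetes-953061 --no-kubernetes                                # accepted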

TestNoKubernetes/serial/StartWithK8s (26.69s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-953061 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-953061 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (26.364237091s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-953061 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (26.69s)

TestNoKubernetes/serial/StartWithStopK8s (16.4s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-953061 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-953061 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (14.165486625s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-953061 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-953061 status -o json: exit status 2 (293.297048ms)
-- stdout --
	{"Name":"NoKubernetes-953061","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-953061
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-953061: (1.945444498s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (16.40s)
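The JSON status above is what the test asserts on: host running, kubelet and apiserver stopped. One way to pull a single field out of it (assuming jq is available on the host):

    out/minikube-linux-amd64 -p NoKubernetes-953061 status -o json | jq -r .Kubelet   # prints "Stopped"; status itself exits 2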

TestNetworkPlugins/group/false (3.34s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-239758 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-239758 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (169.992152ms)
-- stdout --
	* [false-239758] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21934
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21934-9122/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-9122/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1122 00:29:46.698484  225921 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:29:46.698569  225921 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:29:46.698580  225921 out.go:374] Setting ErrFile to fd 2...
	I1122 00:29:46.698586  225921 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:29:46.698799  225921 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9122/.minikube/bin
	I1122 00:29:46.699249  225921 out.go:368] Setting JSON to false
	I1122 00:29:46.700270  225921 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4336,"bootTime":1763767051,"procs":304,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1122 00:29:46.700329  225921 start.go:143] virtualization: kvm guest
	I1122 00:29:46.702091  225921 out.go:179] * [false-239758] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1122 00:29:46.703248  225921 out.go:179]   - MINIKUBE_LOCATION=21934
	I1122 00:29:46.703251  225921 notify.go:221] Checking for updates...
	I1122 00:29:46.705637  225921 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 00:29:46.707967  225921 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-9122/kubeconfig
	I1122 00:29:46.709092  225921 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-9122/.minikube
	I1122 00:29:46.710270  225921 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1122 00:29:46.711447  225921 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 00:29:46.713181  225921 config.go:182] Loaded profile config "NoKubernetes-953061": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1122 00:29:46.713304  225921 config.go:182] Loaded profile config "cert-expiration-624739": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:29:46.713436  225921 config.go:182] Loaded profile config "kubernetes-upgrade-619859": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:29:46.713539  225921 driver.go:422] Setting default libvirt URI to qemu:///system
	I1122 00:29:46.737554  225921 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1122 00:29:46.737670  225921 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:29:46.801156  225921 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-22 00:29:46.790843928 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1122 00:29:46.801269  225921 docker.go:319] overlay module found
	I1122 00:29:46.803225  225921 out.go:179] * Using the docker driver based on user configuration
	I1122 00:29:46.804343  225921 start.go:309] selected driver: docker
	I1122 00:29:46.804359  225921 start.go:930] validating driver "docker" against <nil>
	I1122 00:29:46.804370  225921 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 00:29:46.805826  225921 out.go:203] 
	W1122 00:29:46.807091  225921 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1122 00:29:46.808090  225921 out.go:203] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-239758 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-239758

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-239758

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-239758

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-239758

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-239758

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-239758

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-239758

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-239758

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-239758

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-239758

>>> host: /etc/nsswitch.conf:
* Profile "false-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-239758"

>>> host: /etc/hosts:
* Profile "false-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-239758"

>>> host: /etc/resolv.conf:
* Profile "false-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-239758"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-239758

>>> host: crictl pods:
* Profile "false-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-239758"

>>> host: crictl containers:
* Profile "false-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-239758"

>>> k8s: describe netcat deployment:
error: context "false-239758" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-239758" does not exist

>>> k8s: netcat logs:
error: context "false-239758" does not exist

>>> k8s: describe coredns deployment:
error: context "false-239758" does not exist

>>> k8s: describe coredns pods:
error: context "false-239758" does not exist

>>> k8s: coredns logs:
error: context "false-239758" does not exist

>>> k8s: describe api server pod(s):
error: context "false-239758" does not exist

>>> k8s: api server logs:
error: context "false-239758" does not exist

>>> host: /etc/cni:
* Profile "false-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-239758"

>>> host: ip a s:
* Profile "false-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-239758"

>>> host: ip r s:
* Profile "false-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-239758"

>>> host: iptables-save:
* Profile "false-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-239758"

>>> host: iptables table nat:
* Profile "false-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-239758"

>>> k8s: describe kube-proxy daemon set:
error: context "false-239758" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-239758" does not exist

>>> k8s: kube-proxy logs:
error: context "false-239758" does not exist

>>> host: kubelet daemon status:
* Profile "false-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-239758"

>>> host: kubelet daemon config:
* Profile "false-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-239758"

>>> k8s: kubelet logs:
* Profile "false-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-239758"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-239758"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-239758"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21934-9122/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 22 Nov 2025 00:29:44 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: NoKubernetes-953061
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21934-9122/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 22 Nov 2025 00:28:34 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: cert-expiration-624739
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21934-9122/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 22 Nov 2025 00:29:35 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: kubernetes-upgrade-619859
contexts:
- context:
    cluster: NoKubernetes-953061
    extensions:
    - extension:
        last-update: Sat, 22 Nov 2025 00:29:44 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: NoKubernetes-953061
  name: NoKubernetes-953061
- context:
    cluster: cert-expiration-624739
    extensions:
    - extension:
        last-update: Sat, 22 Nov 2025 00:28:34 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-624739
  name: cert-expiration-624739
- context:
    cluster: kubernetes-upgrade-619859
    user: kubernetes-upgrade-619859
  name: kubernetes-upgrade-619859
current-context: NoKubernetes-953061
kind: Config
users:
- name: NoKubernetes-953061
  user:
    client-certificate: /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/NoKubernetes-953061/client.crt
    client-key: /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/NoKubernetes-953061/client.key
- name: cert-expiration-624739
  user:
    client-certificate: /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/cert-expiration-624739/client.crt
    client-key: /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/cert-expiration-624739/client.key
- name: kubernetes-upgrade-619859
  user:
    client-certificate: /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/kubernetes-upgrade-619859/client.crt
    client-key: /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/kubernetes-upgrade-619859/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-239758

>>> host: docker daemon status:
* Profile "false-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-239758"

>>> host: docker daemon config:
* Profile "false-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-239758"

>>> host: /etc/docker/daemon.json:
* Profile "false-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-239758"

>>> host: docker system info:
* Profile "false-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-239758"

>>> host: cri-docker daemon status:
* Profile "false-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-239758"

>>> host: cri-docker daemon config:
* Profile "false-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-239758"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-239758"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-239758"

>>> host: cri-dockerd version:
* Profile "false-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-239758"

>>> host: containerd daemon status:
* Profile "false-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-239758"

>>> host: containerd daemon config:
* Profile "false-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-239758"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-239758"

>>> host: /etc/containerd/config.toml:
* Profile "false-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-239758"

>>> host: containerd config dump:
* Profile "false-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-239758"

>>> host: crio daemon status:
* Profile "false-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-239758"

>>> host: crio daemon config:
* Profile "false-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-239758"

>>> host: /etc/crio:
* Profile "false-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-239758"

>>> host: crio config:
* Profile "false-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-239758"

----------------------- debugLogs end: false-239758 [took: 3.019545968s] --------------------------------
helpers_test.go:175: Cleaning up "false-239758" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-239758
--- PASS: TestNetworkPlugins/group/false (3.34s)
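This failure is the expected one: the crio runtime requires a CNI, so --cni=false exits with MK_USAGE (14). Any concrete CNI choice passes the check, for example:

    minikube start -p false-239758 --cni=false --container-runtime=crio --driver=docker    # exit 14: "crio" requires CNI
    minikube start -p false-239758 --cni=bridge --container-runtime=crio --driver=docker   # bridge, kindnet, calico, cilium, or flannel all satisfy the check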

TestNoKubernetes/serial/Start (6.67s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-953061 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-953061 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (6.672487061s)
--- PASS: TestNoKubernetes/serial/Start (6.67s)

TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/21934-9122/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-953061 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-953061 "sudo systemctl is-active --quiet service kubelet": exit status 1 (271.470067ms)
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)
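The verification shells into the node and leans on systemctl's exit status: is-active returns 3 for an inactive unit, and minikube ssh propagates the failure (the "Process exited with status 3" above). Manually:

    out/minikube-linux-amd64 ssh -p NoKubernetes-953061 "sudo systemctl is-active --quiet service kubelet"
    echo $?   # non-zero (remote status 3 = inactive) is exactly what a no-kubernetes profile should report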

TestNoKubernetes/serial/ProfileList (16.44s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:194: (dbg) Done: out/minikube-linux-amd64 profile list: (15.685612212s)
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (16.44s)
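profile list also has the machine-readable form checked by the second invocation above. A sketch for extracting just the profile names, assuming jq and minikube's usual valid/invalid JSON layout:

    out/minikube-linux-amd64 profile list --output=json | jq -r '.valid[].Name'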

TestStartStop/group/old-k8s-version/serial/FirstStart (46.76s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-377321 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-377321 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (46.758790602s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (46.76s)

TestNoKubernetes/serial/Stop (1.55s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-953061
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-953061: (1.545381646s)
--- PASS: TestNoKubernetes/serial/Stop (1.55s)

TestNoKubernetes/serial/StartNoArgs (6.29s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-953061 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-953061 --driver=docker  --container-runtime=crio: (6.285422036s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.29s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-953061 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-953061 "sudo systemctl is-active --quiet service kubelet": exit status 1 (292.852724ms)
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

TestStartStop/group/no-preload/serial/FirstStart (49.57s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-983546 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-983546 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (49.572731524s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (49.57s)

TestStartStop/group/old-k8s-version/serial/DeployApp (7.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-377321 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [b5061100-b7c0-483b-a449-40e98a2335f6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [b5061100-b7c0-483b-a449-40e98a2335f6] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 7.00374895s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-377321 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (7.24s)
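
Note: the DeployApp flow above is plain kubectl: create the busybox pod, wait on its label, then exec a sanity command. A rough manual equivalent (testdata/busybox.yaml ships with the minikube integration tests and its contents are assumed here; 480s mirrors the test's 8m0s wait):

	kubectl --context old-k8s-version-377321 create -f testdata/busybox.yaml
	kubectl --context old-k8s-version-377321 wait --for=condition=Ready pod -l integration-test=busybox --timeout=480s
	kubectl --context old-k8s-version-377321 exec busybox -- /bin/sh -c "ulimit -n"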

TestStartStop/group/old-k8s-version/serial/Stop (15.99s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-377321 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-377321 --alsologtostderr -v=3: (15.988007718s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (15.99s)

TestStartStop/group/no-preload/serial/DeployApp (7.21s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-983546 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [fb74c704-d21d-4567-8e3f-cfa2d8132aa9] Pending
helpers_test.go:352: "busybox" [fb74c704-d21d-4567-8e3f-cfa2d8132aa9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [fb74c704-d21d-4567-8e3f-cfa2d8132aa9] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 7.003682427s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-983546 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (7.21s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-377321 -n old-k8s-version-377321
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-377321 -n old-k8s-version-377321: exit status 7 (82.137509ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-377321 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)
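
Note: "minikube status --format={{.Host}}" exits 7 when the profile's host is stopped, which is why the test logs "may be ok" and continues. Enabling the dashboard addon against the stopped profile appears to only record the change in the profile's config, taking effect on the next start. A reproduction sketch:

	out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-377321 || echo "exit $?: host stopped"
	out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-377321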

TestStartStop/group/old-k8s-version/serial/SecondStart (44.17s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-377321 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-377321 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (43.862482425s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-377321 -n old-k8s-version-377321
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (44.17s)

TestStartStop/group/no-preload/serial/Stop (16.29s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-983546 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-983546 --alsologtostderr -v=3: (16.289740766s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (16.29s)

TestStartStop/group/embed-certs/serial/FirstStart (69.86s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-084979 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-084979 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m9.858690818s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (69.86s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-983546 -n no-preload-983546
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-983546 -n no-preload-983546: exit status 7 (79.731569ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-983546 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/no-preload/serial/SecondStart (44.46s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-983546 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-983546 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (44.138715633s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-983546 -n no-preload-983546
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (44.46s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-8fvls" [80fdd4a9-2931-48e7-8084-644a5da2b47b] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003184727s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-8fvls" [80fdd4a9-2931-48e7-8084-644a5da2b47b] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003531017s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-377321 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-377321 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)
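
Note: VerifyKubernetesImages lists the images present in the profile and reports anything outside the expected Kubernetes set; the kindnet and busybox images above are the extras these tests install, so the check still passes. To eyeball the same list by hand (a sketch; --format also accepts short, json and yaml):

	out/minikube-linux-amd64 -p old-k8s-version-377321 image list --format=table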

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-fb2ss" [c903c3de-d57d-4f5d-9a37-79b8cd83c15c] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003409799s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (41.6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-046175 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-046175 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (41.601273728s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (41.60s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-fb2ss" [c903c3de-d57d-4f5d-9a37-79b8cd83c15c] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00262959s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-983546 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-983546 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/embed-certs/serial/DeployApp (8.24s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-084979 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [ed303111-f811-473a-89a1-52a608759f93] Pending
helpers_test.go:352: "busybox" [ed303111-f811-473a-89a1-52a608759f93] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [ed303111-f811-473a-89a1-52a608759f93] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004215033s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-084979 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.24s)

TestStartStop/group/newest-cni/serial/FirstStart (29.15s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-531189 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-531189 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (29.152652817s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (29.15s)

TestStartStop/group/embed-certs/serial/Stop (16.21s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-084979 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-084979 --alsologtostderr -v=3: (16.210952371s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (16.21s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-046175 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [865d2f9a-32be-473d-8149-08e560d58cdf] Pending
helpers_test.go:352: "busybox" [865d2f9a-32be-473d-8149-08e560d58cdf] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [865d2f9a-32be-473d-8149-08e560d58cdf] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.007127522s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-046175 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.24s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-084979 -n embed-certs-084979
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-084979 -n embed-certs-084979: exit status 7 (86.048762ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-084979 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/embed-certs/serial/SecondStart (43.69s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-084979 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1122 00:33:19.474823   14585 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/addons-386094/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-084979 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (43.325862349s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-084979 -n embed-certs-084979
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (43.69s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/Stop (8.02s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-531189 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-531189 --alsologtostderr -v=3: (8.01918101s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.02s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (16.45s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-046175 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-046175 --alsologtostderr -v=3: (16.445193393s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (16.45s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-531189 -n newest-cni-531189
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-531189 -n newest-cni-531189: exit status 7 (91.040457ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-531189 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/newest-cni/serial/SecondStart (10.43s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-531189 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-531189 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (10.079401018s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-531189 -n newest-cni-531189
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (10.43s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-531189 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-046175 -n default-k8s-diff-port-046175
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-046175 -n default-k8s-diff-port-046175: exit status 7 (94.079359ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-046175 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (45.35s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-046175 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-046175 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (45.029586059s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-046175 -n default-k8s-diff-port-046175
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (45.35s)

TestNetworkPlugins/group/auto/Start (44.45s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-239758 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-239758 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (44.448131288s)
--- PASS: TestNetworkPlugins/group/auto/Start (44.45s)

TestNetworkPlugins/group/kindnet/Start (43.19s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-239758 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-239758 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (43.185785222s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (43.19s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-qrrmd" [e0fbb25a-db5f-4d07-9c19-7181a408010c] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003405071s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-qrrmd" [e0fbb25a-db5f-4d07-9c19-7181a408010c] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006085902s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-084979 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-084979 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

TestNetworkPlugins/group/calico/Start (52.94s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-239758 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-239758 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (52.935827273s)
--- PASS: TestNetworkPlugins/group/calico/Start (52.94s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-jqktd" [c7691f68-4748-403c-b999-decb49f55769] Running
E1122 00:34:31.051802   14585 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/functional-159819/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00449484s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-jqktd" [c7691f68-4748-403c-b999-decb49f55769] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003045237s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-046175 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-239758 "pgrep -a kubelet"
I1122 00:34:38.590795   14585 config.go:182] Loaded profile config "auto-239758": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.28s)
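
Note: KubeletFlags only greps the running kubelet's command line over ssh, so the assertion is easy to repeat by hand:

	# prints the kubelet PID and its full argument list inside the node
	out/minikube-linux-amd64 ssh -p auto-239758 "pgrep -a kubelet"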

TestNetworkPlugins/group/auto/NetCatPod (8.18s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-239758 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-hjlhc" [d459bb7c-5e5d-41ee-9d41-38a71e7fa5a8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-hjlhc" [d459bb7c-5e5d-41ee-9d41-38a71e7fa5a8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 8.004156293s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (8.18s)
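
Note: NetCatPod re-applies the netcat deployment with "replace --force" so reruns start from a fresh object, then waits on the app=netcat label. An approximate manual equivalent (testdata/netcat-deployment.yaml ships with the minikube integration tests; 900s mirrors the test's 15m0s wait):

	kubectl --context auto-239758 replace --force -f testdata/netcat-deployment.yaml
	kubectl --context auto-239758 wait --for=condition=Ready pod -l app=netcat --timeout=900s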

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-046175 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-wmb78" [90ceb9ef-eb4e-4cd9-97b5-1c95ad408f96] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003800934s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
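
Note: ControllerPod gates the traffic tests on the CNI agent itself being healthy. The same readiness wait by hand, with the label and namespace taken from this log (600s mirrors the test's 10m0s wait):

	kubectl --context kindnet-239758 -n kube-system wait --for=condition=Ready pod -l app=kindnet --timeout=600s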

TestNetworkPlugins/group/auto/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-239758 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)

TestNetworkPlugins/group/auto/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-239758 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.10s)

TestNetworkPlugins/group/auto/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-239758 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.09s)
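
Note: the DNS, Localhost and HairPin checks above probe, from inside the netcat pod: cluster DNS, a port on localhost, and hairpin traffic back to the pod through its own service (whether the last one succeeds depends on the CNI's hairpin mode). The commands are exactly what the log shows and can be rerun directly:

	kubectl --context auto-239758 exec deployment/netcat -- nslookup kubernetes.default
	kubectl --context auto-239758 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	# hairpin: connect to our own service name from inside the backing pod
	kubectl --context auto-239758 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"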

TestNetworkPlugins/group/kindnet/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-239758 "pgrep -a kubelet"
I1122 00:34:49.751228   14585 config.go:182] Loaded profile config "kindnet-239758": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.30s)

TestNetworkPlugins/group/kindnet/NetCatPod (8.19s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-239758 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-5pzw2" [971cceed-c774-4f9a-a451-1f6e9da9a93d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-5pzw2" [971cceed-c774-4f9a-a451-1f6e9da9a93d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 8.005027345s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (8.19s)

TestNetworkPlugins/group/custom-flannel/Start (58.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-239758 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-239758 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (58.129308805s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (58.13s)

TestNetworkPlugins/group/kindnet/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-239758 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

TestNetworkPlugins/group/kindnet/Localhost (0.09s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-239758 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.09s)

TestNetworkPlugins/group/kindnet/HairPin (0.08s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-239758 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.08s)

TestNetworkPlugins/group/enable-default-cni/Start (69.77s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-239758 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-239758 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m9.765658184s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (69.77s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-9jpwm" [3b433e5f-befa-430f-9f72-40090bc37c1d] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-9jpwm" [3b433e5f-befa-430f-9f72-40090bc37c1d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004260395s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/Start (51.32s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-239758 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-239758 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (51.322403095s)
--- PASS: TestNetworkPlugins/group/flannel/Start (51.32s)

TestNetworkPlugins/group/calico/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-239758 "pgrep -a kubelet"
I1122 00:35:23.245472   14585 config.go:182] Loaded profile config "calico-239758": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.34s)

TestNetworkPlugins/group/calico/NetCatPod (10.77s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-239758 replace --force -f testdata/netcat-deployment.yaml
I1122 00:35:23.721932   14585 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I1122 00:35:23.885404   14585 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-blzjf" [55a434bc-83cc-47bd-9216-e71cc7345bb1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-blzjf" [55a434bc-83cc-47bd-9216-e71cc7345bb1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.003580231s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.77s)
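
Note: the two kapi.go "Waiting for deployment netcat to stabilize" lines show the harness polling the deployment until observedGeneration and status.replicas catch up with the spec after the replace. kubectl's rollout machinery performs a comparable wait (a sketch, not the harness's exact logic):

	# blocks until the deployment's status has converged on its current generation
	kubectl --context calico-239758 rollout status deployment/netcat --timeout=15m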

TestNetworkPlugins/group/calico/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-239758 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.12s)

TestNetworkPlugins/group/calico/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-239758 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.10s)

TestNetworkPlugins/group/calico/HairPin (0.08s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-239758 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.08s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-239758 "pgrep -a kubelet"
I1122 00:35:49.204844   14585 config.go:182] Loaded profile config "custom-flannel-239758": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-239758 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-tp2j4" [6ab1db61-5a32-4c2a-8ae4-f7fdbc4ec56b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-tp2j4" [6ab1db61-5a32-4c2a-8ae4-f7fdbc4ec56b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.003712917s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.23s)

TestNetworkPlugins/group/bridge/Start (42.6s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-239758 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-239758 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (42.598306177s)
--- PASS: TestNetworkPlugins/group/bridge/Start (42.60s)

TestNetworkPlugins/group/custom-flannel/DNS (0.1s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-239758 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.10s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.09s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-239758 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.09s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.08s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-239758 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.08s)

TestNetworkPlugins/group/flannel/ControllerPod (6s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-tk7sn" [21e1ec79-4b51-44f8-ac70-86ac8117eb03] Running
E1122 00:36:12.221214   14585 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/old-k8s-version-377321/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003168372s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.00s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-239758 "pgrep -a kubelet"
I1122 00:36:17.220878   14585 config.go:182] Loaded profile config "flannel-239758": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

TestNetworkPlugins/group/flannel/NetCatPod (9.18s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-239758 replace --force -f testdata/netcat-deployment.yaml
E1122 00:36:17.343405   14585 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/old-k8s-version-377321/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-xvlbv" [3c817b36-a841-4f25-be02-c164d1ef5013] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-xvlbv" [3c817b36-a841-4f25-be02-c164d1ef5013] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.002618978s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.18s)
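The same gate can be expressed against the Deployment itself rather than a label selector; a sketch:

    # returns once the netcat Deployment's replicas are rolled out and available
    kubectl --context flannel-239758 rollout status deployment/netcat --timeout=15m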

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-239758 "pgrep -a kubelet"
I1122 00:36:18.708395   14585 config.go:182] Loaded profile config "enable-default-cni-239758": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.40s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-239758 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-ntqdk" [a9212ada-83a2-43d1-8c64-21b0dbf163a5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-ntqdk" [a9212ada-83a2-43d1-8c64-21b0dbf163a5] Running
E1122 00:36:24.977468   14585 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/no-preload-983546/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:36:24.983828   14585 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/no-preload-983546/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:36:24.995187   14585 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/no-preload-983546/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:36:25.016500   14585 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/no-preload-983546/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:36:25.057797   14585 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/no-preload-983546/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:36:25.139124   14585 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/no-preload-983546/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:36:25.300593   14585 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/no-preload-983546/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:36:25.622548   14585 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/no-preload-983546/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:36:26.263812   14585 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/no-preload-983546/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 8.003492566s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.23s)
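The interleaved cert_rotation errors are background noise rather than a failure of this test: client-go's certificate reload watcher is still polling the client.crt of the no-preload-983546 profile, which appears to have been deleted earlier in the run. A sketch of confirming the file is gone (path copied from the log above):

    test -f /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/no-preload-983546/client.crt \
      || echo "client.crt was removed along with the profile"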

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-239758 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.11s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-239758 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.08s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-239758 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.08s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-239758 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.10s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-239758 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.08s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-239758 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.08s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-239758 "pgrep -a kubelet"
I1122 00:36:37.873529   14585 config.go:182] Loaded profile config "bridge-239758": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (8.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-239758 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-sp5mn" [f013dc09-f513-41a6-b0e0-f6641d0ab1ed] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-sp5mn" [f013dc09-f513-41a6-b0e0-f6641d0ab1ed] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 8.004532811s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (8.21s)
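The Pending -> Running transition logged above can also be observed interactively; a sketch:

    # stream status changes for the netcat pods as they happen
    kubectl --context bridge-239758 get pods -l app=netcat -w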

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-239758 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.11s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-239758 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.09s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-239758 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.10s)

                                                
                                    

Test skip (27/328)

TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-751225" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-751225
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)
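The cleanup the harness performs for skipped groups can be reproduced manually against any leftover profile; a sketch:

    minikube profile list                             # enumerate existing profiles
    minikube delete -p disable-driver-mounts-751225   # delete the profile and its resources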

                                                
                                    
TestNetworkPlugins/group/kubenet (3.56s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-239758 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-239758

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-239758

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-239758

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-239758

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-239758

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-239758

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-239758

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-239758

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-239758

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-239758

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-239758"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-239758"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-239758"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: kubenet-239758

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-239758"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-239758"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-239758" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-239758" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-239758" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-239758" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-239758" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-239758" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-239758" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-239758" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-239758"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-239758"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-239758"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-239758"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-239758"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-239758" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-239758" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-239758" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-239758"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-239758"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-239758"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-239758"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-239758"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21934-9122/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 22 Nov 2025 00:29:44 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: NoKubernetes-953061
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21934-9122/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 22 Nov 2025 00:28:34 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: cert-expiration-624739
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21934-9122/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 22 Nov 2025 00:29:35 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: kubernetes-upgrade-619859
contexts:
- context:
    cluster: NoKubernetes-953061
    extensions:
    - extension:
        last-update: Sat, 22 Nov 2025 00:29:44 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: NoKubernetes-953061
  name: NoKubernetes-953061
- context:
    cluster: cert-expiration-624739
    extensions:
    - extension:
        last-update: Sat, 22 Nov 2025 00:28:34 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-624739
  name: cert-expiration-624739
- context:
    cluster: kubernetes-upgrade-619859
    user: kubernetes-upgrade-619859
  name: kubernetes-upgrade-619859
current-context: NoKubernetes-953061
kind: Config
users:
- name: NoKubernetes-953061
  user:
    client-certificate: /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/NoKubernetes-953061/client.crt
    client-key: /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/NoKubernetes-953061/client.key
- name: cert-expiration-624739
  user:
    client-certificate: /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/cert-expiration-624739/client.crt
    client-key: /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/cert-expiration-624739/client.key
- name: kubernetes-upgrade-619859
  user:
    client-certificate: /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/kubernetes-upgrade-619859/client.crt
    client-key: /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/kubernetes-upgrade-619859/client.key
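With that merged kubeconfig, any of the three remaining clusters can be targeted explicitly; a sketch using the standard kubectl context subcommands:

    kubectl config get-contexts                         # list all contexts in the merged kubeconfig
    kubectl config use-context cert-expiration-624739   # switch the active context
    kubectl config current-context                      # verify the switch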

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-239758

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-239758"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-239758"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-239758"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-239758"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-239758"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-239758"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-239758"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-239758"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-239758"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-239758"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-239758"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-239758"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-239758"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-239758"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-239758"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-239758"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-239758"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-239758"

                                                
                                                
----------------------- debugLogs end: kubenet-239758 [took: 3.394646121s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-239758" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-239758
--- SKIP: TestNetworkPlugins/group/kubenet (3.56s)
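Every debugLogs probe above failed identically because the kubenet-239758 profile was never started, so no kubeconfig context exists for it; a collector could cheaply guard on that first. A sketch:

    # exits non-zero (and prints an error) when the context is absent
    kubectl config get-contexts kubenet-239758 || echo "context kubenet-239758 absent; skipping log collection"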

                                                
                                    
TestNetworkPlugins/group/cilium (3.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-239758 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-239758

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-239758

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-239758

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-239758

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-239758

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-239758

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-239758

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-239758

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-239758

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-239758

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-239758"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-239758"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-239758"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-239758

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-239758"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-239758"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-239758" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-239758" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-239758" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-239758" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-239758" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-239758" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-239758" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-239758" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-239758"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-239758"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-239758"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-239758"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-239758"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-239758

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-239758

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-239758" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-239758" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-239758

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-239758

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-239758" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-239758" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-239758" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-239758" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-239758" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-239758"

>>> host: kubelet daemon config:
* Profile "cilium-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-239758"

>>> k8s: kubelet logs:
* Profile "cilium-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-239758"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-239758"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-239758"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21934-9122/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 22 Nov 2025 00:29:44 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: NoKubernetes-953061
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21934-9122/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 22 Nov 2025 00:28:34 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: cert-expiration-624739
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21934-9122/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 22 Nov 2025 00:29:35 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: kubernetes-upgrade-619859
contexts:
- context:
    cluster: NoKubernetes-953061
    extensions:
    - extension:
        last-update: Sat, 22 Nov 2025 00:29:44 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: NoKubernetes-953061
  name: NoKubernetes-953061
- context:
    cluster: cert-expiration-624739
    extensions:
    - extension:
        last-update: Sat, 22 Nov 2025 00:28:34 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-624739
  name: cert-expiration-624739
- context:
    cluster: kubernetes-upgrade-619859
    user: kubernetes-upgrade-619859
  name: kubernetes-upgrade-619859
current-context: NoKubernetes-953061
kind: Config
users:
- name: NoKubernetes-953061
  user:
    client-certificate: /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/NoKubernetes-953061/client.crt
    client-key: /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/NoKubernetes-953061/client.key
- name: cert-expiration-624739
  user:
    client-certificate: /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/cert-expiration-624739/client.crt
    client-key: /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/cert-expiration-624739/client.key
- name: kubernetes-upgrade-619859
  user:
    client-certificate: /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/kubernetes-upgrade-619859/client.crt
    client-key: /home/jenkins/minikube-integration/21934-9122/.minikube/profiles/kubernetes-upgrade-619859/client.key
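
The kubeconfig above shows only the NoKubernetes-953061, cert-expiration-624739 and kubernetes-upgrade-619859 clusters and contexts, so every kubectl call against the cilium-239758 context has to fail. A sketch of the same check done by hand, assuming kubectl resolves this run's kubeconfig:

  kubectl config get-contexts               # cilium-239758 does not appear
  kubectl config use-context cilium-239758  # fails: no context with that name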

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-239758

>>> host: docker daemon status:
* Profile "cilium-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-239758"

>>> host: docker daemon config:
* Profile "cilium-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-239758"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-239758"

>>> host: docker system info:
* Profile "cilium-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-239758"

>>> host: cri-docker daemon status:
* Profile "cilium-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-239758"

>>> host: cri-docker daemon config:
* Profile "cilium-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-239758"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-239758"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-239758"

>>> host: cri-dockerd version:
* Profile "cilium-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-239758"

>>> host: containerd daemon status:
* Profile "cilium-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-239758"

>>> host: containerd daemon config:
* Profile "cilium-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-239758"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-239758"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-239758"

>>> host: containerd config dump:
* Profile "cilium-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-239758"

>>> host: crio daemon status:
* Profile "cilium-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-239758"

>>> host: crio daemon config:
* Profile "cilium-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-239758"

>>> host: /etc/crio:
* Profile "cilium-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-239758"

>>> host: crio config:
* Profile "cilium-239758" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-239758"
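
The two error shapes track the two probe families: the ">>> host:" entries presumably run through "minikube ssh -p cilium-239758" (hence the profile-not-found hint), while the ">>> k8s:" entries run through "kubectl --context cilium-239758" (hence the context errors). A sketch of equivalent manual commands, under that assumption:

  out/minikube-linux-amd64 ssh -p cilium-239758 -- sudo crio config   # profile not found
  kubectl --context cilium-239758 get configmaps -A                   # context does not exist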

----------------------- debugLogs end: cilium-239758 [took: 3.307769065s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-239758" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-239758
--- SKIP: TestNetworkPlugins/group/cilium (3.47s)
